00:00:00.000 Started by upstream project "spdk-dpdk-per-patch" build number 271 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.043 The recommended git tool is: git 00:00:00.043 using credential 00000000-0000-0000-0000-000000000002 00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.060 Fetching changes from the remote Git repository 00:00:00.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.085 Using shallow fetch with depth 1 00:00:00.085 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.085 > git --version # timeout=10 00:00:00.111 > git --version # 'git version 2.39.2' 00:00:00.111 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.146 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.146 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.305 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.318 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.330 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:02.330 > git config core.sparsecheckout # timeout=10 00:00:02.341 > git read-tree -mu HEAD # timeout=10 00:00:02.358 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:02.379 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:02.379 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:02.468 [Pipeline] Start of Pipeline 00:00:02.484 [Pipeline] library 00:00:02.486 Loading library shm_lib@master 00:00:02.486 Library shm_lib@master is cached. Copying from home. 00:00:02.506 [Pipeline] node 00:00:02.520 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:02.522 [Pipeline] { 00:00:02.532 [Pipeline] catchError 00:00:02.533 [Pipeline] { 00:00:02.544 [Pipeline] wrap 00:00:02.552 [Pipeline] { 00:00:02.561 [Pipeline] stage 00:00:02.563 [Pipeline] { (Prologue) 00:00:02.806 [Pipeline] sh 00:00:03.098 + logger -p user.info -t JENKINS-CI 00:00:03.124 [Pipeline] echo 00:00:03.127 Node: WFP39 00:00:03.133 [Pipeline] sh 00:00:03.424 [Pipeline] setCustomBuildProperty 00:00:03.438 [Pipeline] echo 00:00:03.439 Cleanup processes 00:00:03.446 [Pipeline] sh 00:00:03.728 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.728 3685041 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.739 [Pipeline] sh 00:00:04.015 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.015 ++ grep -v 'sudo pgrep' 00:00:04.015 ++ awk '{print $1}' 00:00:04.016 + sudo kill -9 00:00:04.016 + true 00:00:04.028 [Pipeline] cleanWs 00:00:04.039 [WS-CLEANUP] Deleting project workspace... 00:00:04.039 [WS-CLEANUP] Deferred wipeout is used... 
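The cleanup step traced above reduces to a small shell idiom; a minimal sketch, assuming the workspace path seen in this log (the actual jjb cleanup script may organize it differently):

  #!/usr/bin/env bash
  # Kill any stale test processes left in the workspace by a previous run.
  WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest
  # List processes whose full command line mentions the spdk checkout (-a prints
  # the command line, -f matches against it), drop the pgrep invocation itself,
  # and keep only the PID column.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # With no survivors $pids is empty and kill exits non-zero; tolerate that the
  # same way the trace does ("+ sudo kill -9" followed by "+ true").
  sudo kill -9 $pids || true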
00:00:04.045 [WS-CLEANUP] done 00:00:04.048 [Pipeline] setCustomBuildProperty 00:00:04.059 [Pipeline] sh 00:00:04.336 + sudo git config --global --replace-all safe.directory '*' 00:00:04.403 [Pipeline] httpRequest 00:00:04.423 [Pipeline] echo 00:00:04.424 Sorcerer 10.211.164.101 is alive 00:00:04.430 [Pipeline] httpRequest 00:00:04.433 HttpMethod: GET 00:00:04.433 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:04.434 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:04.435 Response Code: HTTP/1.1 200 OK 00:00:04.436 Success: Status code 200 is in the accepted range: 200,404 00:00:04.436 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.249 [Pipeline] sh 00:00:05.529 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.540 [Pipeline] httpRequest 00:00:05.565 [Pipeline] echo 00:00:05.565 Sorcerer 10.211.164.101 is alive 00:00:05.570 [Pipeline] httpRequest 00:00:05.574 HttpMethod: GET 00:00:05.574 URL: http://10.211.164.101/packages/spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:00:05.575 Sending request to url: http://10.211.164.101/packages/spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:00:05.587 Response Code: HTTP/1.1 200 OK 00:00:05.587 Success: Status code 200 is in the accepted range: 200,404 00:00:05.588 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:01:00.494 [Pipeline] sh 00:01:00.778 + tar --no-same-owner -xf spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:01:04.987 [Pipeline] sh 00:01:05.268 + git -C spdk log --oneline -n5 00:01:05.268 89fd17309 bdev/raid: add qos for raid process 00:01:05.268 9645ea138 util: move module/sock/sock_kernel.h contents to net.c 00:01:05.268 e8671c893 util: add spdk_net_get_interface_name 00:01:05.268 7798a2572 scripts/nvmf_perf: set all NIC RX queues at once 00:01:05.268 986fe0958 scripts/nvmf_perf: indent multi-line strings 00:01:05.281 [Pipeline] sh 00:01:05.564 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/75/24275/1 00:01:06.943 From https://review.spdk.io/gerrit/spdk/dpdk 00:01:06.943 * branch refs/changes/75/24275/1 -> FETCH_HEAD 00:01:06.954 [Pipeline] sh 00:01:07.235 + git -C spdk/dpdk checkout FETCH_HEAD 00:01:07.802 Previous HEAD position was 08f3a46de7 pmdinfogen: avoid empty string in ELFSymbol() 00:01:07.802 HEAD is now at 6766bde469 eal/alarm_cancel: Fix thread starvation 00:01:07.811 [Pipeline] } 00:01:07.828 [Pipeline] // stage 00:01:07.837 [Pipeline] stage 00:01:07.839 [Pipeline] { (Prepare) 00:01:07.858 [Pipeline] writeFile 00:01:07.875 [Pipeline] sh 00:01:08.157 + logger -p user.info -t JENKINS-CI 00:01:08.170 [Pipeline] sh 00:01:08.450 + logger -p user.info -t JENKINS-CI 00:01:08.463 [Pipeline] sh 00:01:08.744 + cat autorun-spdk.conf 00:01:08.744 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.744 SPDK_TEST_FUZZER_SHORT=1 00:01:08.744 SPDK_TEST_FUZZER=1 00:01:08.744 SPDK_RUN_UBSAN=1 00:01:08.751 RUN_NIGHTLY= 00:01:08.756 [Pipeline] readFile 00:01:08.783 [Pipeline] withEnv 00:01:08.785 [Pipeline] { 00:01:08.799 [Pipeline] sh 00:01:09.083 + set -ex 00:01:09.083 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:09.083 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:09.083 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.083 ++ 
SPDK_TEST_FUZZER_SHORT=1 00:01:09.083 ++ SPDK_TEST_FUZZER=1 00:01:09.083 ++ SPDK_RUN_UBSAN=1 00:01:09.083 ++ RUN_NIGHTLY= 00:01:09.083 + case $SPDK_TEST_NVMF_NICS in 00:01:09.083 + DRIVERS= 00:01:09.083 + [[ -n '' ]] 00:01:09.083 + exit 0 00:01:09.093 [Pipeline] } 00:01:09.111 [Pipeline] // withEnv 00:01:09.115 [Pipeline] } 00:01:09.131 [Pipeline] // stage 00:01:09.159 [Pipeline] catchError 00:01:09.161 [Pipeline] { 00:01:09.177 [Pipeline] timeout 00:01:09.177 Timeout set to expire in 30 min 00:01:09.179 [Pipeline] { 00:01:09.194 [Pipeline] stage 00:01:09.195 [Pipeline] { (Tests) 00:01:09.209 [Pipeline] sh 00:01:09.487 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:09.487 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:09.487 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:09.487 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:09.487 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:09.487 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:09.487 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:09.487 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:09.487 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:09.487 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:09.487 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:09.487 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:09.487 + source /etc/os-release 00:01:09.487 ++ NAME='Fedora Linux' 00:01:09.487 ++ VERSION='38 (Cloud Edition)' 00:01:09.487 ++ ID=fedora 00:01:09.487 ++ VERSION_ID=38 00:01:09.487 ++ VERSION_CODENAME= 00:01:09.487 ++ PLATFORM_ID=platform:f38 00:01:09.487 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:09.487 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:09.487 ++ LOGO=fedora-logo-icon 00:01:09.487 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:09.487 ++ HOME_URL=https://fedoraproject.org/ 00:01:09.487 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:09.487 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:09.487 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:09.487 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:09.487 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:09.487 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:09.487 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:09.487 ++ SUPPORT_END=2024-05-14 00:01:09.487 ++ VARIANT='Cloud Edition' 00:01:09.487 ++ VARIANT_ID=cloud 00:01:09.487 + uname -a 00:01:09.487 Linux spdk-wfp-39 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux 00:01:09.487 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:13.673 Hugepages 00:01:13.674 node hugesize free / total 00:01:13.674 node0 1048576kB 0 / 0 00:01:13.674 node0 2048kB 0 / 0 00:01:13.674 node1 1048576kB 0 / 0 00:01:13.674 node1 2048kB 0 / 0 00:01:13.674 00:01:13.674 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.674 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:13.674 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:13.674 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:13.674 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:13.674 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:13.674 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:13.674 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:13.674 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:13.674 
NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:13.674 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:13.674 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:13.674 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:13.674 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:13.674 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:13.674 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:13.674 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:13.674 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:13.674 + rm -f /tmp/spdk-ld-path 00:01:13.674 + source autorun-spdk.conf 00:01:13.674 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.674 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:13.674 ++ SPDK_TEST_FUZZER=1 00:01:13.674 ++ SPDK_RUN_UBSAN=1 00:01:13.674 ++ RUN_NIGHTLY= 00:01:13.674 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:13.674 + [[ -n '' ]] 00:01:13.674 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:13.674 + for M in /var/spdk/build-*-manifest.txt 00:01:13.674 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:13.674 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:13.674 + for M in /var/spdk/build-*-manifest.txt 00:01:13.674 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:13.674 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:13.674 ++ uname 00:01:13.674 + [[ Linux == \L\i\n\u\x ]] 00:01:13.674 + sudo dmesg -T 00:01:13.674 + sudo dmesg --clear 00:01:13.674 + dmesg_pid=3686146 00:01:13.674 + [[ Fedora Linux == FreeBSD ]] 00:01:13.674 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.674 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.674 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:13.674 + [[ -x /usr/src/fio-static/fio ]] 00:01:13.674 + export FIO_BIN=/usr/src/fio-static/fio 00:01:13.674 + FIO_BIN=/usr/src/fio-static/fio 00:01:13.674 + sudo dmesg -Tw 00:01:13.674 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:13.674 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:13.674 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:13.674 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.674 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.674 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:13.674 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.674 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.674 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:13.674 Test configuration: 00:01:13.674 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.674 SPDK_TEST_FUZZER_SHORT=1 00:01:13.674 SPDK_TEST_FUZZER=1 00:01:13.674 SPDK_RUN_UBSAN=1 00:01:13.674 RUN_NIGHTLY= 18:16:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:13.674 18:16:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:13.674 18:16:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:13.674 18:16:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:13.674 18:16:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.674 18:16:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.674 18:16:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.674 18:16:31 -- paths/export.sh@5 -- $ export PATH 00:01:13.674 18:16:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.674 18:16:31 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:13.674 18:16:31 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:13.674 18:16:31 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721578591.XXXXXX 00:01:13.674 18:16:31 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721578591.yieQRf 00:01:13.674 18:16:31 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:13.674 18:16:31 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:13.674 18:16:31 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:13.674 18:16:31 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:13.674 18:16:31 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:13.674 18:16:31 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:13.674 18:16:31 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:13.674 18:16:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.674 18:16:31 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:13.674 18:16:31 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:13.674 18:16:31 -- pm/common@17 -- $ local monitor 00:01:13.674 18:16:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.674 18:16:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.674 18:16:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.674 18:16:31 -- pm/common@21 -- $ date +%s 00:01:13.674 18:16:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.674 18:16:31 -- pm/common@21 -- $ date +%s 00:01:13.674 18:16:31 -- pm/common@25 -- $ sleep 1 00:01:13.674 18:16:31 -- pm/common@21 -- $ date +%s 00:01:13.674 18:16:31 -- pm/common@21 -- $ date +%s 00:01:13.674 18:16:31 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721578591 00:01:13.674 18:16:31 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721578591 00:01:13.674 18:16:31 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721578591 00:01:13.674 18:16:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721578591 00:01:13.674 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721578591_collect-vmstat.pm.log 00:01:13.674 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721578591_collect-cpu-load.pm.log 00:01:13.674 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721578591_collect-cpu-temp.pm.log 00:01:13.674 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721578591_collect-bmc-pm.bmc.pm.log 00:01:14.612 18:16:32 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:14.612 18:16:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:14.612 18:16:32 -- spdk/autobuild.sh@12 -- $ 
umask 022 00:01:14.612 18:16:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:14.612 18:16:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:14.612 Sun Jul 21 04:16:32 PM UTC 2024 00:01:14.612 18:16:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:14.612 v24.09-pre-254-g89fd17309 00:01:14.612 18:16:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:14.612 18:16:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:14.612 18:16:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:14.612 18:16:32 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:14.612 18:16:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:14.612 18:16:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.612 ************************************ 00:01:14.612 START TEST ubsan 00:01:14.612 ************************************ 00:01:14.612 18:16:32 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:14.612 using ubsan 00:01:14.612 00:01:14.612 real 0m0.001s 00:01:14.612 user 0m0.000s 00:01:14.612 sys 0m0.001s 00:01:14.612 18:16:32 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:14.612 18:16:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:14.612 ************************************ 00:01:14.612 END TEST ubsan 00:01:14.612 ************************************ 00:01:14.612 18:16:32 -- common/autotest_common.sh@1142 -- $ return 0 00:01:14.612 18:16:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:14.612 18:16:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:14.612 18:16:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:14.612 18:16:32 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:14.612 18:16:32 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:14.612 18:16:32 -- common/autobuild_common.sh@435 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:14.612 18:16:32 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:14.612 18:16:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:14.612 18:16:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.872 ************************************ 00:01:14.872 START TEST autobuild_llvm_precompile 00:01:14.872 ************************************ 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:01:14.872 Target: x86_64-redhat-linux-gnu 00:01:14.872 Thread model: posix 00:01:14.872 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:14.872 18:16:32 autobuild_llvm_precompile -- 
common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:01:14.872 18:16:32 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:15.130 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:15.130 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:15.697 Using 'verbs' RDMA provider 00:01:31.525 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:46.480 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:46.739 Creating mk/config.mk...done. 00:01:46.739 Creating mk/cc.flags.mk...done. 00:01:46.739 Type 'make' to build. 00:01:46.739 00:01:46.739 real 0m32.089s 00:01:46.739 user 0m14.819s 00:01:46.739 sys 0m16.655s 00:01:46.739 18:17:04 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:46.739 18:17:04 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:46.739 ************************************ 00:01:46.739 END TEST autobuild_llvm_precompile 00:01:46.739 ************************************ 00:01:46.999 18:17:04 -- common/autotest_common.sh@1142 -- $ return 0 00:01:46.999 18:17:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:46.999 18:17:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:46.999 18:17:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:46.999 18:17:04 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:46.999 18:17:04 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:47.258 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:47.258 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:47.827 Using 'verbs' RDMA provider 00:02:03.651 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:15.859 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:15.859 Creating mk/config.mk...done. 00:02:15.859 Creating mk/cc.flags.mk...done. 00:02:15.859 Type 'make' to build. 
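Condensed, the fuzzer-runtime discovery traced before this configure run works as follows; a sketch assuming the clang 16 layout on this host (the real autobuild_common.sh glob also matches the full x.y.z clang version, elided here):

  #!/usr/bin/env bash
  shopt -s extglob nullglob   # @(...) and ?(...) below are extglob patterns
  clang_num=16                # major version parsed from "clang --version"
  # Candidate static libFuzzer archives shipped with this clang; on this host the
  # glob resolves to /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
  fuzzer_libs=(/usr/lib*/clang/@("$clang_num")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
  fuzzer_lib=${fuzzer_libs[0]}
  [[ -e $fuzzer_lib ]] || { echo "libclang_rt.fuzzer_no_main not found" >&2; exit 1; }
  # Build SPDK with clang and hand the runtime to configure via the same
  # --with-fuzzer flag visible in the traced command line.
  CC=clang-$clang_num CXX=clang++-$clang_num \
    ./configure --enable-debug --enable-ubsan --with-fuzzer="$fuzzer_lib"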
00:02:15.859 18:17:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:02:15.859 18:17:32 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:15.859 18:17:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:15.859 18:17:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.859 ************************************ 00:02:15.859 START TEST make 00:02:15.859 ************************************ 00:02:15.859 18:17:32 make -- common/autotest_common.sh@1123 -- $ make -j72 00:02:15.859 make[1]: Nothing to be done for 'all'. 00:02:17.244 The Meson build system 00:02:17.244 Version: 1.3.1 00:02:17.244 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:17.244 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:17.244 Build type: native build 00:02:17.244 Project name: libvfio-user 00:02:17.244 Project version: 0.0.1 00:02:17.244 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:02:17.244 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:02:17.244 Host machine cpu family: x86_64 00:02:17.244 Host machine cpu: x86_64 00:02:17.244 Run-time dependency threads found: YES 00:02:17.244 Library dl found: YES 00:02:17.244 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:17.244 Run-time dependency json-c found: YES 0.17 00:02:17.244 Run-time dependency cmocka found: YES 1.1.7 00:02:17.244 Program pytest-3 found: NO 00:02:17.244 Program flake8 found: NO 00:02:17.244 Program misspell-fixer found: NO 00:02:17.244 Program restructuredtext-lint found: NO 00:02:17.244 Program valgrind found: YES (/usr/bin/valgrind) 00:02:17.244 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.244 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.244 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.244 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:17.244 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:17.244 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:17.244 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:17.244 Build targets in project: 8 00:02:17.244 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:17.244 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:17.244 00:02:17.244 libvfio-user 0.0.1 00:02:17.244 00:02:17.244 User defined options 00:02:17.244 buildtype : debug 00:02:17.244 default_library: static 00:02:17.244 libdir : /usr/local/lib 00:02:17.244 00:02:17.244 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.502 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:17.760 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:17.760 [2/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:17.760 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:17.760 [4/36] Compiling C object samples/null.p/null.c.o 00:02:17.760 [5/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:17.760 [6/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:17.760 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:17.760 [8/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:17.760 [9/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:17.760 [10/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:17.760 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:17.760 [12/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:17.760 [13/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:17.760 [14/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:17.760 [15/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:17.760 [16/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:17.760 [17/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:17.760 [18/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:17.760 [19/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:17.760 [20/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:17.760 [21/36] Compiling C object samples/server.p/server.c.o 00:02:17.760 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:17.760 [23/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:17.760 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:17.760 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:17.760 [26/36] Compiling C object samples/client.p/client.c.o 00:02:17.760 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:17.760 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:17.760 [29/36] Linking static target lib/libvfio-user.a 00:02:17.760 [30/36] Linking target samples/client 00:02:17.760 [31/36] Linking target test/unit_tests 00:02:17.760 [32/36] Linking target samples/lspci 00:02:17.760 [33/36] Linking target samples/server 00:02:17.760 [34/36] Linking target samples/shadow_ioeventfd_server 00:02:18.018 [35/36] Linking target samples/null 00:02:18.018 [36/36] Linking target samples/gpio-pci-idio-16 00:02:18.018 INFO: autodetecting backend as ninja 00:02:18.018 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:18.018 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:18.276 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:18.276 ninja: no work to do. 00:02:24.838 The Meson build system 00:02:24.838 Version: 1.3.1 00:02:24.838 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:24.839 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:24.839 Build type: native build 00:02:24.839 Program cat found: YES (/usr/bin/cat) 00:02:24.839 Project name: DPDK 00:02:24.839 Project version: 24.03.0 00:02:24.839 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:02:24.839 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:02:24.839 Host machine cpu family: x86_64 00:02:24.839 Host machine cpu: x86_64 00:02:24.839 Message: ## Building in Developer Mode ## 00:02:24.839 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:24.839 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:24.839 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:24.839 Program python3 found: YES (/usr/bin/python3) 00:02:24.839 Program cat found: YES (/usr/bin/cat) 00:02:24.839 Compiler for C supports arguments -march=native: YES 00:02:24.839 Checking for size of "void *" : 8 00:02:24.839 Checking for size of "void *" : 8 (cached) 00:02:24.839 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:24.839 Library m found: YES 00:02:24.839 Library numa found: YES 00:02:24.839 Has header "numaif.h" : YES 00:02:24.839 Library fdt found: NO 00:02:24.839 Library execinfo found: NO 00:02:24.839 Has header "execinfo.h" : YES 00:02:24.839 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:24.839 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:24.839 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:24.839 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:24.839 Run-time dependency openssl found: YES 3.0.9 00:02:24.839 Run-time dependency libpcap found: YES 1.10.4 00:02:24.839 Has header "pcap.h" with dependency libpcap: YES 00:02:24.839 Compiler for C supports arguments -Wcast-qual: YES 00:02:24.839 Compiler for C supports arguments -Wdeprecated: YES 00:02:24.839 Compiler for C supports arguments -Wformat: YES 00:02:24.839 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:24.839 Compiler for C supports arguments -Wformat-security: YES 00:02:24.839 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:24.839 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:24.839 Compiler for C supports arguments -Wnested-externs: YES 00:02:24.839 Compiler for C supports arguments -Wold-style-definition: YES 00:02:24.839 Compiler for C supports arguments -Wpointer-arith: YES 00:02:24.839 Compiler for C supports arguments -Wsign-compare: YES 00:02:24.839 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:24.839 Compiler for C supports arguments -Wundef: YES 00:02:24.839 Compiler for C supports arguments -Wwrite-strings: YES 00:02:24.839 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:24.839 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:24.839 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:24.839 Program objdump found: YES (/usr/bin/objdump) 00:02:24.839 Compiler for C supports arguments -mavx512f: YES 00:02:24.839 Checking if "AVX512 checking" compiles: YES 00:02:24.839 Fetching value of define "__SSE4_2__" : 1 00:02:24.839 Fetching value of define "__AES__" : 1 00:02:24.839 Fetching value of define "__AVX__" : 1 00:02:24.839 Fetching value of define "__AVX2__" : 1 00:02:24.839 Fetching value of define "__AVX512BW__" : 1 00:02:24.839 Fetching value of define "__AVX512CD__" : 1 00:02:24.839 Fetching value of define "__AVX512DQ__" : 1 00:02:24.839 Fetching value of define "__AVX512F__" : 1 00:02:24.839 Fetching value of define "__AVX512VL__" : 1 00:02:24.839 Fetching value of define "__PCLMUL__" : 1 00:02:24.839 Fetching value of define "__RDRND__" : 1 00:02:24.839 Fetching value of define "__RDSEED__" : 1 00:02:24.839 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:24.839 Fetching value of define "__znver1__" : (undefined) 00:02:24.839 Fetching value of define "__znver2__" : (undefined) 00:02:24.839 Fetching value of define "__znver3__" : (undefined) 00:02:24.839 Fetching value of define "__znver4__" : (undefined) 00:02:24.839 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:24.839 Message: lib/log: Defining dependency "log" 00:02:24.839 Message: lib/kvargs: Defining dependency "kvargs" 00:02:24.839 Message: lib/telemetry: Defining dependency "telemetry" 00:02:24.839 Checking for function "getentropy" : NO 00:02:24.839 Message: lib/eal: Defining dependency "eal" 00:02:24.839 Message: lib/ring: Defining dependency "ring" 00:02:24.839 Message: lib/rcu: Defining dependency "rcu" 00:02:24.839 Message: lib/mempool: Defining dependency "mempool" 00:02:24.839 Message: lib/mbuf: Defining dependency "mbuf" 00:02:24.839 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:24.839 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:24.839 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:24.839 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:24.839 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:24.839 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:24.839 Compiler for C supports arguments -mpclmul: YES 00:02:24.839 Compiler for C supports arguments -maes: YES 00:02:24.839 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:24.839 Compiler for C supports arguments -mavx512bw: YES 00:02:24.839 Compiler for C supports arguments -mavx512dq: YES 00:02:24.839 Compiler for C supports arguments -mavx512vl: YES 00:02:24.839 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:24.839 Compiler for C supports arguments -mavx2: YES 00:02:24.839 Compiler for C supports arguments -mavx: YES 00:02:24.839 Message: lib/net: Defining dependency "net" 00:02:24.839 Message: lib/meter: Defining dependency "meter" 00:02:24.839 Message: lib/ethdev: Defining dependency "ethdev" 00:02:24.839 Message: lib/pci: Defining dependency "pci" 00:02:24.839 Message: lib/cmdline: Defining dependency "cmdline" 00:02:24.839 Message: lib/hash: Defining dependency "hash" 00:02:24.839 Message: lib/timer: Defining dependency "timer" 00:02:24.839 Message: lib/compressdev: Defining dependency "compressdev" 00:02:24.839 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:24.839 Message: lib/dmadev: Defining dependency "dmadev" 00:02:24.839 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:24.839 Message: lib/power: Defining dependency "power" 00:02:24.839 Message: lib/reorder: Defining 
dependency "reorder" 00:02:24.839 Message: lib/security: Defining dependency "security" 00:02:24.839 Has header "linux/userfaultfd.h" : YES 00:02:24.839 Has header "linux/vduse.h" : YES 00:02:24.839 Message: lib/vhost: Defining dependency "vhost" 00:02:24.839 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:24.839 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:24.839 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:24.839 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:24.839 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:24.839 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:24.839 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:24.839 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:24.839 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:24.839 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:24.839 Program doxygen found: YES (/usr/bin/doxygen) 00:02:24.839 Configuring doxy-api-html.conf using configuration 00:02:24.839 Configuring doxy-api-man.conf using configuration 00:02:24.839 Program mandb found: YES (/usr/bin/mandb) 00:02:24.839 Program sphinx-build found: NO 00:02:24.839 Configuring rte_build_config.h using configuration 00:02:24.839 Message: 00:02:24.839 ================= 00:02:24.839 Applications Enabled 00:02:24.839 ================= 00:02:24.839 00:02:24.839 apps: 00:02:24.839 00:02:24.839 00:02:24.839 Message: 00:02:24.839 ================= 00:02:24.839 Libraries Enabled 00:02:24.839 ================= 00:02:24.839 00:02:24.839 libs: 00:02:24.839 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:24.839 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:24.839 cryptodev, dmadev, power, reorder, security, vhost, 00:02:24.839 00:02:24.839 Message: 00:02:24.839 =============== 00:02:24.839 Drivers Enabled 00:02:24.839 =============== 00:02:24.839 00:02:24.839 common: 00:02:24.839 00:02:24.839 bus: 00:02:24.839 pci, vdev, 00:02:24.839 mempool: 00:02:24.839 ring, 00:02:24.839 dma: 00:02:24.839 00:02:24.839 net: 00:02:24.839 00:02:24.839 crypto: 00:02:24.839 00:02:24.839 compress: 00:02:24.839 00:02:24.839 vdpa: 00:02:24.839 00:02:24.839 00:02:24.839 Message: 00:02:24.839 ================= 00:02:24.839 Content Skipped 00:02:24.839 ================= 00:02:24.839 00:02:24.839 apps: 00:02:24.839 dumpcap: explicitly disabled via build config 00:02:24.839 graph: explicitly disabled via build config 00:02:24.839 pdump: explicitly disabled via build config 00:02:24.839 proc-info: explicitly disabled via build config 00:02:24.839 test-acl: explicitly disabled via build config 00:02:24.839 test-bbdev: explicitly disabled via build config 00:02:24.839 test-cmdline: explicitly disabled via build config 00:02:24.839 test-compress-perf: explicitly disabled via build config 00:02:24.839 test-crypto-perf: explicitly disabled via build config 00:02:24.839 test-dma-perf: explicitly disabled via build config 00:02:24.839 test-eventdev: explicitly disabled via build config 00:02:24.839 test-fib: explicitly disabled via build config 00:02:24.839 test-flow-perf: explicitly disabled via build config 00:02:24.839 test-gpudev: explicitly disabled via build config 00:02:24.839 test-mldev: explicitly disabled via build config 00:02:24.839 test-pipeline: explicitly disabled via build config 00:02:24.839 test-pmd: explicitly 
disabled via build config 00:02:24.839 test-regex: explicitly disabled via build config 00:02:24.839 test-sad: explicitly disabled via build config 00:02:24.839 test-security-perf: explicitly disabled via build config 00:02:24.839 00:02:24.839 libs: 00:02:24.839 argparse: explicitly disabled via build config 00:02:24.839 metrics: explicitly disabled via build config 00:02:24.839 acl: explicitly disabled via build config 00:02:24.839 bbdev: explicitly disabled via build config 00:02:24.839 bitratestats: explicitly disabled via build config 00:02:24.839 bpf: explicitly disabled via build config 00:02:24.839 cfgfile: explicitly disabled via build config 00:02:24.839 distributor: explicitly disabled via build config 00:02:24.839 efd: explicitly disabled via build config 00:02:24.839 eventdev: explicitly disabled via build config 00:02:24.839 dispatcher: explicitly disabled via build config 00:02:24.839 gpudev: explicitly disabled via build config 00:02:24.839 gro: explicitly disabled via build config 00:02:24.839 gso: explicitly disabled via build config 00:02:24.840 ip_frag: explicitly disabled via build config 00:02:24.840 jobstats: explicitly disabled via build config 00:02:24.840 latencystats: explicitly disabled via build config 00:02:24.840 lpm: explicitly disabled via build config 00:02:24.840 member: explicitly disabled via build config 00:02:24.840 pcapng: explicitly disabled via build config 00:02:24.840 rawdev: explicitly disabled via build config 00:02:24.840 regexdev: explicitly disabled via build config 00:02:24.840 mldev: explicitly disabled via build config 00:02:24.840 rib: explicitly disabled via build config 00:02:24.840 sched: explicitly disabled via build config 00:02:24.840 stack: explicitly disabled via build config 00:02:24.840 ipsec: explicitly disabled via build config 00:02:24.840 pdcp: explicitly disabled via build config 00:02:24.840 fib: explicitly disabled via build config 00:02:24.840 port: explicitly disabled via build config 00:02:24.840 pdump: explicitly disabled via build config 00:02:24.840 table: explicitly disabled via build config 00:02:24.840 pipeline: explicitly disabled via build config 00:02:24.840 graph: explicitly disabled via build config 00:02:24.840 node: explicitly disabled via build config 00:02:24.840 00:02:24.840 drivers: 00:02:24.840 common/cpt: not in enabled drivers build config 00:02:24.840 common/dpaax: not in enabled drivers build config 00:02:24.840 common/iavf: not in enabled drivers build config 00:02:24.840 common/idpf: not in enabled drivers build config 00:02:24.840 common/ionic: not in enabled drivers build config 00:02:24.840 common/mvep: not in enabled drivers build config 00:02:24.840 common/octeontx: not in enabled drivers build config 00:02:24.840 bus/auxiliary: not in enabled drivers build config 00:02:24.840 bus/cdx: not in enabled drivers build config 00:02:24.840 bus/dpaa: not in enabled drivers build config 00:02:24.840 bus/fslmc: not in enabled drivers build config 00:02:24.840 bus/ifpga: not in enabled drivers build config 00:02:24.840 bus/platform: not in enabled drivers build config 00:02:24.840 bus/uacce: not in enabled drivers build config 00:02:24.840 bus/vmbus: not in enabled drivers build config 00:02:24.840 common/cnxk: not in enabled drivers build config 00:02:24.840 common/mlx5: not in enabled drivers build config 00:02:24.840 common/nfp: not in enabled drivers build config 00:02:24.840 common/nitrox: not in enabled drivers build config 00:02:24.840 common/qat: not in enabled drivers build config 
00:02:24.840 common/sfc_efx: not in enabled drivers build config 00:02:24.840 mempool/bucket: not in enabled drivers build config 00:02:24.840 mempool/cnxk: not in enabled drivers build config 00:02:24.840 mempool/dpaa: not in enabled drivers build config 00:02:24.840 mempool/dpaa2: not in enabled drivers build config 00:02:24.840 mempool/octeontx: not in enabled drivers build config 00:02:24.840 mempool/stack: not in enabled drivers build config 00:02:24.840 dma/cnxk: not in enabled drivers build config 00:02:24.840 dma/dpaa: not in enabled drivers build config 00:02:24.840 dma/dpaa2: not in enabled drivers build config 00:02:24.840 dma/hisilicon: not in enabled drivers build config 00:02:24.840 dma/idxd: not in enabled drivers build config 00:02:24.840 dma/ioat: not in enabled drivers build config 00:02:24.840 dma/skeleton: not in enabled drivers build config 00:02:24.840 net/af_packet: not in enabled drivers build config 00:02:24.840 net/af_xdp: not in enabled drivers build config 00:02:24.840 net/ark: not in enabled drivers build config 00:02:24.840 net/atlantic: not in enabled drivers build config 00:02:24.840 net/avp: not in enabled drivers build config 00:02:24.840 net/axgbe: not in enabled drivers build config 00:02:24.840 net/bnx2x: not in enabled drivers build config 00:02:24.840 net/bnxt: not in enabled drivers build config 00:02:24.840 net/bonding: not in enabled drivers build config 00:02:24.840 net/cnxk: not in enabled drivers build config 00:02:24.840 net/cpfl: not in enabled drivers build config 00:02:24.840 net/cxgbe: not in enabled drivers build config 00:02:24.840 net/dpaa: not in enabled drivers build config 00:02:24.840 net/dpaa2: not in enabled drivers build config 00:02:24.840 net/e1000: not in enabled drivers build config 00:02:24.840 net/ena: not in enabled drivers build config 00:02:24.840 net/enetc: not in enabled drivers build config 00:02:24.840 net/enetfec: not in enabled drivers build config 00:02:24.840 net/enic: not in enabled drivers build config 00:02:24.840 net/failsafe: not in enabled drivers build config 00:02:24.840 net/fm10k: not in enabled drivers build config 00:02:24.840 net/gve: not in enabled drivers build config 00:02:24.840 net/hinic: not in enabled drivers build config 00:02:24.840 net/hns3: not in enabled drivers build config 00:02:24.840 net/i40e: not in enabled drivers build config 00:02:24.840 net/iavf: not in enabled drivers build config 00:02:24.840 net/ice: not in enabled drivers build config 00:02:24.840 net/idpf: not in enabled drivers build config 00:02:24.840 net/igc: not in enabled drivers build config 00:02:24.840 net/ionic: not in enabled drivers build config 00:02:24.840 net/ipn3ke: not in enabled drivers build config 00:02:24.840 net/ixgbe: not in enabled drivers build config 00:02:24.840 net/mana: not in enabled drivers build config 00:02:24.840 net/memif: not in enabled drivers build config 00:02:24.840 net/mlx4: not in enabled drivers build config 00:02:24.840 net/mlx5: not in enabled drivers build config 00:02:24.840 net/mvneta: not in enabled drivers build config 00:02:24.840 net/mvpp2: not in enabled drivers build config 00:02:24.840 net/netvsc: not in enabled drivers build config 00:02:24.840 net/nfb: not in enabled drivers build config 00:02:24.840 net/nfp: not in enabled drivers build config 00:02:24.840 net/ngbe: not in enabled drivers build config 00:02:24.840 net/null: not in enabled drivers build config 00:02:24.840 net/octeontx: not in enabled drivers build config 00:02:24.840 net/octeon_ep: not in enabled 
drivers build config 00:02:24.840 net/pcap: not in enabled drivers build config 00:02:24.840 net/pfe: not in enabled drivers build config 00:02:24.840 net/qede: not in enabled drivers build config 00:02:24.840 net/ring: not in enabled drivers build config 00:02:24.840 net/sfc: not in enabled drivers build config 00:02:24.840 net/softnic: not in enabled drivers build config 00:02:24.840 net/tap: not in enabled drivers build config 00:02:24.840 net/thunderx: not in enabled drivers build config 00:02:24.840 net/txgbe: not in enabled drivers build config 00:02:24.840 net/vdev_netvsc: not in enabled drivers build config 00:02:24.840 net/vhost: not in enabled drivers build config 00:02:24.840 net/virtio: not in enabled drivers build config 00:02:24.840 net/vmxnet3: not in enabled drivers build config 00:02:24.840 raw/*: missing internal dependency, "rawdev" 00:02:24.840 crypto/armv8: not in enabled drivers build config 00:02:24.840 crypto/bcmfs: not in enabled drivers build config 00:02:24.840 crypto/caam_jr: not in enabled drivers build config 00:02:24.840 crypto/ccp: not in enabled drivers build config 00:02:24.840 crypto/cnxk: not in enabled drivers build config 00:02:24.840 crypto/dpaa_sec: not in enabled drivers build config 00:02:24.840 crypto/dpaa2_sec: not in enabled drivers build config 00:02:24.840 crypto/ipsec_mb: not in enabled drivers build config 00:02:24.840 crypto/mlx5: not in enabled drivers build config 00:02:24.840 crypto/mvsam: not in enabled drivers build config 00:02:24.840 crypto/nitrox: not in enabled drivers build config 00:02:24.840 crypto/null: not in enabled drivers build config 00:02:24.840 crypto/octeontx: not in enabled drivers build config 00:02:24.840 crypto/openssl: not in enabled drivers build config 00:02:24.840 crypto/scheduler: not in enabled drivers build config 00:02:24.840 crypto/uadk: not in enabled drivers build config 00:02:24.840 crypto/virtio: not in enabled drivers build config 00:02:24.840 compress/isal: not in enabled drivers build config 00:02:24.840 compress/mlx5: not in enabled drivers build config 00:02:24.840 compress/nitrox: not in enabled drivers build config 00:02:24.840 compress/octeontx: not in enabled drivers build config 00:02:24.840 compress/zlib: not in enabled drivers build config 00:02:24.840 regex/*: missing internal dependency, "regexdev" 00:02:24.840 ml/*: missing internal dependency, "mldev" 00:02:24.840 vdpa/ifc: not in enabled drivers build config 00:02:24.840 vdpa/mlx5: not in enabled drivers build config 00:02:24.840 vdpa/nfp: not in enabled drivers build config 00:02:24.840 vdpa/sfc: not in enabled drivers build config 00:02:24.840 event/*: missing internal dependency, "eventdev" 00:02:24.840 baseband/*: missing internal dependency, "bbdev" 00:02:24.840 gpu/*: missing internal dependency, "gpudev" 00:02:24.840 00:02:24.840 00:02:25.142 Build targets in project: 85 00:02:25.142 00:02:25.142 DPDK 24.03.0 00:02:25.142 00:02:25.142 User defined options 00:02:25.142 buildtype : debug 00:02:25.142 default_library : static 00:02:25.142 libdir : lib 00:02:25.142 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:25.142 c_args : -fPIC -Werror 00:02:25.142 c_link_args : 00:02:25.142 cpu_instruction_set: native 00:02:25.142 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:25.142 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:25.142 enable_docs : false 00:02:25.142 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:25.142 enable_kmods : false 00:02:25.142 max_lcores : 128 00:02:25.142 tests : false 00:02:25.142 00:02:25.142 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:25.426 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:25.689 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:25.689 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:25.689 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:25.689 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.689 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:25.689 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.689 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:25.689 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:25.689 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:25.689 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:25.689 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:25.689 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:25.689 [13/268] Linking static target lib/librte_kvargs.a 00:02:25.689 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:25.689 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:25.689 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:25.689 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:25.689 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:25.689 [19/268] Linking static target lib/librte_log.a 00:02:26.260 [20/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.260 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.260 [22/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.260 [23/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:26.260 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:26.260 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.260 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:26.260 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:26.260 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:26.260 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:26.260 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:26.260 [31/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:26.260 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:26.260 [33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:26.260 [34/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:26.260 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:26.260 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:26.260 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:26.260 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.260 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.260 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:26.260 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:26.260 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:26.260 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:26.260 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:26.260 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:26.260 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:26.260 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.260 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:26.260 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:26.260 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:26.260 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:26.260 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.260 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:26.260 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:26.260 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:26.260 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.260 [57/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:26.260 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.260 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:26.260 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:26.260 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:26.260 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:26.260 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:26.260 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:26.260 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.260 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:26.260 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:26.260 [68/268] Linking static target lib/librte_telemetry.a 00:02:26.520 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:26.520 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:26.520 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:26.520 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.520 [73/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:26.520 
[74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:26.520 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:26.520 [76/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:26.520 [77/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:26.520 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:26.520 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:26.520 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:26.520 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:26.520 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:26.520 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:26.520 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:26.520 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:26.520 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:26.520 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:26.520 [88/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:26.520 [89/268] Linking static target lib/librte_pci.a 00:02:26.520 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:26.520 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:26.520 [92/268] Linking static target lib/librte_ring.a 00:02:26.520 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:26.520 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:26.521 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:26.521 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:26.521 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:26.521 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:26.521 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:26.521 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:26.521 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:26.521 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:26.521 [103/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:26.521 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:26.521 [105/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:26.521 [106/268] Linking static target lib/librte_mempool.a 00:02:26.521 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:26.521 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:26.521 [109/268] Linking static target lib/librte_eal.a 00:02:26.521 [110/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.521 [111/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:26.521 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:26.521 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:26.521 [114/268] Linking static target lib/librte_rcu.a 00:02:26.521 [115/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:26.521 [116/268] Linking target lib/librte_log.so.24.1 00:02:26.780 [117/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:26.780 [118/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:26.780 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.780 [120/268] Linking static target lib/librte_meter.a 00:02:26.780 [121/268] Linking static target lib/librte_mbuf.a 00:02:26.780 [122/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:26.780 [123/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.780 [124/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:26.780 [125/268] Linking static target lib/librte_net.a 00:02:26.780 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:27.040 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:27.040 [128/268] Linking target lib/librte_kvargs.so.24.1 00:02:27.040 [129/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:27.040 [130/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.040 [131/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.040 [132/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:27.040 [133/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.040 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:27.040 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.040 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.040 [137/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.040 [138/268] Linking target lib/librte_telemetry.so.24.1 00:02:27.040 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.040 [140/268] Linking static target lib/librte_timer.a 00:02:27.040 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:27.040 [142/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.040 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:27.040 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.040 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:27.040 [146/268] Linking static target lib/librte_cmdline.a 00:02:27.040 [147/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:27.040 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:27.041 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:27.041 [150/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:27.041 [151/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:27.041 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:27.041 [153/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:27.041 [154/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:27.041 [155/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:27.041 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:27.041 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:27.299 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:27.299 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.299 [160/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:27.299 [161/268] Linking static target lib/librte_dmadev.a 00:02:27.299 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:27.299 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:27.299 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:27.299 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:27.299 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.299 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:27.299 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:27.299 [169/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:27.299 [170/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:27.299 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:27.299 [172/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:27.299 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:27.299 [174/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.299 [175/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:27.299 [176/268] Linking static target lib/librte_compressdev.a 00:02:27.299 [177/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:27.299 [178/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:27.299 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:27.299 [180/268] Linking static target lib/librte_hash.a 00:02:27.299 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:27.299 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:27.299 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:27.299 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:27.299 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:27.299 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:27.299 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:27.299 [188/268] Linking static target lib/librte_power.a 00:02:27.299 [189/268] Linking static target lib/librte_reorder.a 00:02:27.299 [190/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.299 [191/268] Linking static target lib/librte_security.a 00:02:27.299 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:27.299 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:27.299 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:27.299 [195/268] Linking static target 
drivers/libtmp_rte_mempool_ring.a 00:02:27.299 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:27.558 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.558 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:27.558 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:27.558 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.558 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.558 [202/268] Linking static target drivers/librte_bus_vdev.a 00:02:27.558 [203/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:27.558 [204/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.558 [205/268] Linking static target lib/librte_cryptodev.a 00:02:27.558 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:27.558 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:27.558 [208/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.558 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.558 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.558 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.558 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.558 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.558 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:27.558 [215/268] Linking static target drivers/librte_mempool_ring.a 00:02:27.817 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:27.817 [217/268] Linking static target lib/librte_ethdev.a 00:02:27.817 [218/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:27.817 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.817 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.817 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.074 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.074 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.332 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.332 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.590 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.590 [227/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.590 [228/268] Linking static target lib/librte_vhost.a 00:02:28.590 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.975 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.908 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:39.019 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.276 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.535 [234/268] Linking target lib/librte_eal.so.24.1 00:02:39.535 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:39.793 [236/268] Linking target lib/librte_timer.so.24.1 00:02:39.793 [237/268] Linking target lib/librte_meter.so.24.1 00:02:39.793 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:39.793 [239/268] Linking target lib/librte_ring.so.24.1 00:02:39.793 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:39.793 [241/268] Linking target lib/librte_pci.so.24.1 00:02:39.793 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:39.793 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:39.793 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:39.793 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:39.793 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:40.051 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:40.051 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:40.051 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:40.051 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:40.051 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:40.309 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:40.309 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:40.309 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:40.567 [255/268] Linking target lib/librte_net.so.24.1 00:02:40.567 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:40.567 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:40.567 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:40.567 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:40.567 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:40.826 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:40.826 [262/268] Linking target lib/librte_hash.so.24.1 00:02:40.826 [263/268] Linking target lib/librte_security.so.24.1 00:02:40.826 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:40.826 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:40.826 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:41.084 [267/268] Linking target lib/librte_power.so.24.1 00:02:41.084 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:41.084 INFO: autodetecting backend as ninja 00:02:41.084 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:42.461 CC lib/ut/ut.o 00:02:42.461 CC lib/log/log.o 00:02:42.461 CC lib/log/log_flags.o 00:02:42.461 CC lib/log/log_deprecated.o 00:02:42.461 CC lib/ut_mock/mock.o 00:02:42.461 LIB libspdk_ut.a 00:02:42.461 LIB libspdk_log.a 00:02:42.461 LIB libspdk_ut_mock.a 00:02:42.720 CC lib/dma/dma.o 00:02:42.720 CC lib/util/base64.o 00:02:42.720 CC lib/util/bit_array.o 00:02:42.720 CC 
lib/util/cpuset.o 00:02:42.720 CC lib/util/crc16.o 00:02:42.720 CC lib/util/crc32c.o 00:02:42.720 CC lib/util/crc32.o 00:02:42.720 CC lib/util/crc32_ieee.o 00:02:42.720 CC lib/util/crc64.o 00:02:42.720 CC lib/util/fd.o 00:02:42.720 CC lib/util/dif.o 00:02:42.720 CC lib/util/fd_group.o 00:02:42.720 CC lib/util/file.o 00:02:42.720 CC lib/util/hexlify.o 00:02:42.720 CC lib/util/math.o 00:02:42.720 CC lib/util/iov.o 00:02:42.720 CC lib/util/strerror_tls.o 00:02:42.720 CXX lib/trace_parser/trace.o 00:02:42.720 CC lib/util/net.o 00:02:42.720 CC lib/util/pipe.o 00:02:42.720 CC lib/ioat/ioat.o 00:02:42.720 CC lib/util/uuid.o 00:02:42.720 CC lib/util/string.o 00:02:42.720 CC lib/util/xor.o 00:02:42.720 CC lib/util/zipf.o 00:02:42.720 CC lib/vfio_user/host/vfio_user.o 00:02:42.720 CC lib/vfio_user/host/vfio_user_pci.o 00:02:42.720 LIB libspdk_dma.a 00:02:42.979 LIB libspdk_ioat.a 00:02:42.979 LIB libspdk_util.a 00:02:42.979 LIB libspdk_vfio_user.a 00:02:43.238 CC lib/vmd/vmd.o 00:02:43.238 CC lib/vmd/led.o 00:02:43.238 CC lib/conf/conf.o 00:02:43.238 CC lib/env_dpdk/env.o 00:02:43.238 CC lib/env_dpdk/memory.o 00:02:43.238 CC lib/env_dpdk/pci.o 00:02:43.238 CC lib/env_dpdk/init.o 00:02:43.238 CC lib/env_dpdk/threads.o 00:02:43.238 CC lib/env_dpdk/pci_ioat.o 00:02:43.238 CC lib/env_dpdk/pci_virtio.o 00:02:43.238 CC lib/env_dpdk/pci_vmd.o 00:02:43.238 CC lib/env_dpdk/pci_idxd.o 00:02:43.238 CC lib/env_dpdk/sigbus_handler.o 00:02:43.238 CC lib/env_dpdk/pci_event.o 00:02:43.238 CC lib/env_dpdk/pci_dpdk.o 00:02:43.238 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.238 CC lib/json/json_parse.o 00:02:43.238 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.238 CC lib/idxd/idxd_user.o 00:02:43.238 CC lib/json/json_util.o 00:02:43.238 CC lib/idxd/idxd.o 00:02:43.238 CC lib/json/json_write.o 00:02:43.238 CC lib/idxd/idxd_kernel.o 00:02:43.238 CC lib/rdma_utils/rdma_utils.o 00:02:43.238 CC lib/rdma_provider/common.o 00:02:43.238 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:43.238 LIB libspdk_trace_parser.a 00:02:43.497 LIB libspdk_rdma_provider.a 00:02:43.497 LIB libspdk_conf.a 00:02:43.497 LIB libspdk_json.a 00:02:43.497 LIB libspdk_rdma_utils.a 00:02:43.755 LIB libspdk_idxd.a 00:02:43.755 LIB libspdk_vmd.a 00:02:44.013 CC lib/jsonrpc/jsonrpc_server.o 00:02:44.013 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:44.013 CC lib/jsonrpc/jsonrpc_client.o 00:02:44.013 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:44.272 LIB libspdk_jsonrpc.a 00:02:44.531 CC lib/rpc/rpc.o 00:02:44.531 LIB libspdk_env_dpdk.a 00:02:44.789 LIB libspdk_rpc.a 00:02:45.047 CC lib/trace/trace.o 00:02:45.047 CC lib/trace/trace_flags.o 00:02:45.047 CC lib/trace/trace_rpc.o 00:02:45.047 CC lib/notify/notify_rpc.o 00:02:45.047 CC lib/notify/notify.o 00:02:45.047 CC lib/keyring/keyring.o 00:02:45.047 CC lib/keyring/keyring_rpc.o 00:02:45.047 LIB libspdk_notify.a 00:02:45.305 LIB libspdk_keyring.a 00:02:45.305 LIB libspdk_trace.a 00:02:45.563 CC lib/thread/thread.o 00:02:45.563 CC lib/thread/iobuf.o 00:02:45.563 CC lib/sock/sock.o 00:02:45.563 CC lib/sock/sock_rpc.o 00:02:45.821 LIB libspdk_sock.a 00:02:46.386 CC lib/nvme/nvme_ctrlr.o 00:02:46.386 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:46.386 CC lib/nvme/nvme_fabric.o 00:02:46.386 CC lib/nvme/nvme_ns_cmd.o 00:02:46.386 CC lib/nvme/nvme_ns.o 00:02:46.386 CC lib/nvme/nvme_pcie_common.o 00:02:46.386 CC lib/nvme/nvme_pcie.o 00:02:46.386 CC lib/nvme/nvme_qpair.o 00:02:46.386 CC lib/nvme/nvme.o 00:02:46.386 CC lib/nvme/nvme_quirks.o 00:02:46.386 CC lib/nvme/nvme_transport.o 00:02:46.386 CC lib/nvme/nvme_discovery.o 
00:02:46.386 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:46.386 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:46.386 CC lib/nvme/nvme_tcp.o 00:02:46.386 CC lib/nvme/nvme_opal.o 00:02:46.386 CC lib/nvme/nvme_io_msg.o 00:02:46.386 CC lib/nvme/nvme_poll_group.o 00:02:46.386 CC lib/nvme/nvme_zns.o 00:02:46.386 CC lib/nvme/nvme_stubs.o 00:02:46.386 CC lib/nvme/nvme_auth.o 00:02:46.386 CC lib/nvme/nvme_cuse.o 00:02:46.386 CC lib/nvme/nvme_vfio_user.o 00:02:46.386 CC lib/nvme/nvme_rdma.o 00:02:46.694 LIB libspdk_thread.a 00:02:46.974 CC lib/blob/blobstore.o 00:02:46.974 CC lib/blob/zeroes.o 00:02:46.974 CC lib/blob/request.o 00:02:46.974 CC lib/blob/blob_bs_dev.o 00:02:46.974 CC lib/accel/accel.o 00:02:46.974 CC lib/accel/accel_rpc.o 00:02:46.974 CC lib/accel/accel_sw.o 00:02:46.974 CC lib/virtio/virtio.o 00:02:46.974 CC lib/virtio/virtio_vhost_user.o 00:02:46.974 CC lib/virtio/virtio_vfio_user.o 00:02:46.974 CC lib/virtio/virtio_pci.o 00:02:46.974 CC lib/init/json_config.o 00:02:46.974 CC lib/init/subsystem_rpc.o 00:02:46.974 CC lib/init/subsystem.o 00:02:46.974 CC lib/init/rpc.o 00:02:46.974 CC lib/vfu_tgt/tgt_endpoint.o 00:02:46.974 CC lib/vfu_tgt/tgt_rpc.o 00:02:47.241 LIB libspdk_init.a 00:02:47.242 LIB libspdk_virtio.a 00:02:47.242 LIB libspdk_vfu_tgt.a 00:02:47.499 CC lib/event/app.o 00:02:47.499 CC lib/event/reactor.o 00:02:47.499 CC lib/event/scheduler_static.o 00:02:47.499 CC lib/event/log_rpc.o 00:02:47.499 CC lib/event/app_rpc.o 00:02:48.064 LIB libspdk_event.a 00:02:48.064 LIB libspdk_accel.a 00:02:48.064 LIB libspdk_nvme.a 00:02:48.322 CC lib/bdev/bdev.o 00:02:48.322 CC lib/bdev/bdev_zone.o 00:02:48.322 CC lib/bdev/bdev_rpc.o 00:02:48.322 CC lib/bdev/part.o 00:02:48.322 CC lib/bdev/scsi_nvme.o 00:02:49.698 LIB libspdk_blob.a 00:02:49.698 CC lib/blobfs/blobfs.o 00:02:49.698 CC lib/blobfs/tree.o 00:02:49.698 CC lib/lvol/lvol.o 00:02:50.654 LIB libspdk_lvol.a 00:02:50.654 LIB libspdk_blobfs.a 00:02:50.654 LIB libspdk_bdev.a 00:02:51.230 CC lib/nvmf/ctrlr.o 00:02:51.230 CC lib/nvmf/ctrlr_discovery.o 00:02:51.230 CC lib/nvmf/ctrlr_bdev.o 00:02:51.230 CC lib/nvmf/nvmf.o 00:02:51.230 CC lib/nvmf/subsystem.o 00:02:51.230 CC lib/nvmf/nvmf_rpc.o 00:02:51.230 CC lib/nvmf/transport.o 00:02:51.230 CC lib/nvmf/tcp.o 00:02:51.230 CC lib/ublk/ublk.o 00:02:51.230 CC lib/nvmf/stubs.o 00:02:51.230 CC lib/scsi/dev.o 00:02:51.230 CC lib/ublk/ublk_rpc.o 00:02:51.230 CC lib/nvmf/mdns_server.o 00:02:51.230 CC lib/scsi/lun.o 00:02:51.230 CC lib/nvmf/vfio_user.o 00:02:51.230 CC lib/scsi/port.o 00:02:51.230 CC lib/nvmf/rdma.o 00:02:51.230 CC lib/scsi/scsi.o 00:02:51.230 CC lib/nvmf/auth.o 00:02:51.230 CC lib/scsi/scsi_bdev.o 00:02:51.230 CC lib/scsi/scsi_pr.o 00:02:51.230 CC lib/scsi/task.o 00:02:51.230 CC lib/scsi/scsi_rpc.o 00:02:51.230 CC lib/ftl/ftl_core.o 00:02:51.230 CC lib/nbd/nbd.o 00:02:51.230 CC lib/nbd/nbd_rpc.o 00:02:51.230 CC lib/ftl/ftl_init.o 00:02:51.230 CC lib/ftl/ftl_debug.o 00:02:51.230 CC lib/ftl/ftl_layout.o 00:02:51.230 CC lib/ftl/ftl_io.o 00:02:51.230 CC lib/ftl/ftl_sb.o 00:02:51.230 CC lib/ftl/ftl_l2p.o 00:02:51.230 CC lib/ftl/ftl_l2p_flat.o 00:02:51.230 CC lib/ftl/ftl_nv_cache.o 00:02:51.230 CC lib/ftl/ftl_band.o 00:02:51.230 CC lib/ftl/ftl_band_ops.o 00:02:51.230 CC lib/ftl/ftl_writer.o 00:02:51.230 CC lib/ftl/ftl_rq.o 00:02:51.230 CC lib/ftl/ftl_l2p_cache.o 00:02:51.230 CC lib/ftl/ftl_p2l.o 00:02:51.230 CC lib/ftl/ftl_reloc.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:51.230 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.230 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:51.230 CC lib/ftl/utils/ftl_conf.o 00:02:51.230 CC lib/ftl/utils/ftl_md.o 00:02:51.230 CC lib/ftl/utils/ftl_bitmap.o 00:02:51.230 CC lib/ftl/utils/ftl_mempool.o 00:02:51.230 CC lib/ftl/utils/ftl_property.o 00:02:51.230 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:51.230 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:51.230 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:51.230 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:51.230 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:51.230 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:51.230 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:51.230 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:51.230 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:51.230 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:51.230 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:51.230 CC lib/ftl/base/ftl_base_dev.o 00:02:51.230 CC lib/ftl/base/ftl_base_bdev.o 00:02:51.491 CC lib/ftl/ftl_trace.o 00:02:51.751 LIB libspdk_nbd.a 00:02:51.751 LIB libspdk_scsi.a 00:02:51.751 LIB libspdk_ublk.a 00:02:52.009 CC lib/vhost/vhost.o 00:02:52.009 CC lib/vhost/vhost_rpc.o 00:02:52.009 CC lib/vhost/vhost_scsi.o 00:02:52.009 CC lib/vhost/vhost_blk.o 00:02:52.009 CC lib/vhost/rte_vhost_user.o 00:02:52.009 CC lib/iscsi/conn.o 00:02:52.009 CC lib/iscsi/init_grp.o 00:02:52.009 CC lib/iscsi/iscsi.o 00:02:52.009 CC lib/iscsi/md5.o 00:02:52.009 CC lib/iscsi/param.o 00:02:52.009 CC lib/iscsi/tgt_node.o 00:02:52.009 CC lib/iscsi/portal_grp.o 00:02:52.009 CC lib/iscsi/iscsi_subsystem.o 00:02:52.009 CC lib/iscsi/iscsi_rpc.o 00:02:52.009 CC lib/iscsi/task.o 00:02:52.267 LIB libspdk_ftl.a 00:02:52.525 LIB libspdk_nvmf.a 00:02:52.783 LIB libspdk_iscsi.a 00:02:53.041 LIB libspdk_vhost.a 00:02:53.299 CC module/env_dpdk/env_dpdk_rpc.o 00:02:53.299 CC module/vfu_device/vfu_virtio.o 00:02:53.299 CC module/vfu_device/vfu_virtio_blk.o 00:02:53.299 CC module/vfu_device/vfu_virtio_scsi.o 00:02:53.299 CC module/vfu_device/vfu_virtio_rpc.o 00:02:53.558 CC module/blob/bdev/blob_bdev.o 00:02:53.558 CC module/accel/ioat/accel_ioat.o 00:02:53.558 CC module/accel/ioat/accel_ioat_rpc.o 00:02:53.558 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:53.558 LIB libspdk_env_dpdk_rpc.a 00:02:53.558 CC module/keyring/file/keyring.o 00:02:53.558 CC module/accel/dsa/accel_dsa.o 00:02:53.558 CC module/keyring/file/keyring_rpc.o 00:02:53.558 CC module/accel/dsa/accel_dsa_rpc.o 00:02:53.558 CC module/accel/error/accel_error.o 00:02:53.558 CC module/accel/error/accel_error_rpc.o 00:02:53.558 CC module/scheduler/gscheduler/gscheduler.o 00:02:53.558 CC module/sock/posix/posix.o 00:02:53.558 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:53.558 CC module/keyring/linux/keyring.o 00:02:53.558 CC module/keyring/linux/keyring_rpc.o 00:02:53.558 CC module/accel/iaa/accel_iaa.o 00:02:53.558 CC module/accel/iaa/accel_iaa_rpc.o 00:02:53.558 LIB libspdk_scheduler_gscheduler.a 00:02:53.558 LIB libspdk_accel_ioat.a 00:02:53.558 LIB libspdk_scheduler_dpdk_governor.a 00:02:53.558 LIB libspdk_keyring_file.a 00:02:53.558 LIB libspdk_keyring_linux.a 00:02:53.558 LIB libspdk_blob_bdev.a 00:02:53.817 LIB libspdk_accel_error.a 00:02:53.817 LIB 
libspdk_scheduler_dynamic.a 00:02:53.817 LIB libspdk_accel_iaa.a 00:02:53.817 LIB libspdk_accel_dsa.a 00:02:54.076 LIB libspdk_vfu_device.a 00:02:54.076 CC module/bdev/error/vbdev_error.o 00:02:54.076 CC module/bdev/error/vbdev_error_rpc.o 00:02:54.076 CC module/bdev/malloc/bdev_malloc.o 00:02:54.076 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:54.076 CC module/bdev/gpt/gpt.o 00:02:54.076 CC module/bdev/null/bdev_null.o 00:02:54.076 CC module/bdev/gpt/vbdev_gpt.o 00:02:54.076 CC module/bdev/passthru/vbdev_passthru.o 00:02:54.076 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:54.076 CC module/bdev/null/bdev_null_rpc.o 00:02:54.076 CC module/bdev/nvme/bdev_nvme.o 00:02:54.076 CC module/bdev/nvme/nvme_rpc.o 00:02:54.076 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:54.076 CC module/bdev/nvme/bdev_mdns_client.o 00:02:54.076 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:54.076 CC module/bdev/nvme/vbdev_opal.o 00:02:54.076 CC module/bdev/aio/bdev_aio.o 00:02:54.076 CC module/bdev/raid/bdev_raid_rpc.o 00:02:54.076 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:54.076 CC module/bdev/aio/bdev_aio_rpc.o 00:02:54.076 CC module/blobfs/bdev/blobfs_bdev.o 00:02:54.076 CC module/bdev/raid/bdev_raid.o 00:02:54.076 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:54.076 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:54.076 CC module/bdev/raid/bdev_raid_sb.o 00:02:54.076 CC module/bdev/raid/raid0.o 00:02:54.076 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:54.076 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:54.076 CC module/bdev/split/vbdev_split.o 00:02:54.076 CC module/bdev/delay/vbdev_delay.o 00:02:54.076 CC module/bdev/raid/raid1.o 00:02:54.076 CC module/bdev/ftl/bdev_ftl.o 00:02:54.076 CC module/bdev/raid/concat.o 00:02:54.076 CC module/bdev/split/vbdev_split_rpc.o 00:02:54.076 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:54.076 CC module/bdev/lvol/vbdev_lvol.o 00:02:54.076 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:54.076 CC module/bdev/iscsi/bdev_iscsi.o 00:02:54.076 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:54.076 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:54.076 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:54.076 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:54.076 LIB libspdk_sock_posix.a 00:02:54.333 LIB libspdk_blobfs_bdev.a 00:02:54.333 LIB libspdk_bdev_split.a 00:02:54.333 LIB libspdk_bdev_error.a 00:02:54.333 LIB libspdk_bdev_gpt.a 00:02:54.333 LIB libspdk_bdev_aio.a 00:02:54.333 LIB libspdk_bdev_null.a 00:02:54.333 LIB libspdk_bdev_ftl.a 00:02:54.333 LIB libspdk_bdev_zone_block.a 00:02:54.591 LIB libspdk_bdev_malloc.a 00:02:54.591 LIB libspdk_bdev_delay.a 00:02:54.591 LIB libspdk_bdev_iscsi.a 00:02:54.591 LIB libspdk_bdev_passthru.a 00:02:54.591 LIB libspdk_bdev_lvol.a 00:02:54.591 LIB libspdk_bdev_virtio.a 00:02:55.158 LIB libspdk_bdev_raid.a 00:02:55.416 LIB libspdk_bdev_nvme.a 00:02:55.982 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:55.982 CC module/event/subsystems/vmd/vmd.o 00:02:55.982 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.982 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.982 CC module/event/subsystems/sock/sock.o 00:02:56.241 CC module/event/subsystems/iobuf/iobuf.o 00:02:56.241 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:56.241 CC module/event/subsystems/scheduler/scheduler.o 00:02:56.241 CC module/event/subsystems/keyring/keyring.o 00:02:56.241 LIB libspdk_event_keyring.a 00:02:56.241 LIB libspdk_event_vfu_tgt.a 00:02:56.241 LIB libspdk_event_vmd.a 00:02:56.241 LIB libspdk_event_vhost_blk.a 00:02:56.241 LIB libspdk_event_scheduler.a 
00:02:56.241 LIB libspdk_event_sock.a 00:02:56.241 LIB libspdk_event_iobuf.a 00:02:56.499 CC module/event/subsystems/accel/accel.o 00:02:56.758 LIB libspdk_event_accel.a 00:02:57.016 CC module/event/subsystems/bdev/bdev.o 00:02:57.275 LIB libspdk_event_bdev.a 00:02:57.533 CC module/event/subsystems/nbd/nbd.o 00:02:57.533 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.533 CC module/event/subsystems/ublk/ublk.o 00:02:57.533 CC module/event/subsystems/scsi/scsi.o 00:02:57.533 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.791 LIB libspdk_event_nbd.a 00:02:57.791 LIB libspdk_event_ublk.a 00:02:57.791 LIB libspdk_event_scsi.a 00:02:57.791 LIB libspdk_event_nvmf.a 00:02:58.050 CC module/event/subsystems/iscsi/iscsi.o 00:02:58.050 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:58.307 LIB libspdk_event_vhost_scsi.a 00:02:58.307 LIB libspdk_event_iscsi.a 00:02:58.573 TEST_HEADER include/spdk/accel.h 00:02:58.573 TEST_HEADER include/spdk/accel_module.h 00:02:58.573 TEST_HEADER include/spdk/assert.h 00:02:58.573 TEST_HEADER include/spdk/barrier.h 00:02:58.573 TEST_HEADER include/spdk/base64.h 00:02:58.573 TEST_HEADER include/spdk/bdev_module.h 00:02:58.573 TEST_HEADER include/spdk/bdev.h 00:02:58.573 TEST_HEADER include/spdk/bit_pool.h 00:02:58.573 TEST_HEADER include/spdk/bit_array.h 00:02:58.573 TEST_HEADER include/spdk/bdev_zone.h 00:02:58.573 TEST_HEADER include/spdk/blob_bdev.h 00:02:58.573 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:58.573 CC test/rpc_client/rpc_client_test.o 00:02:58.573 TEST_HEADER include/spdk/blob.h 00:02:58.573 TEST_HEADER include/spdk/blobfs.h 00:02:58.573 TEST_HEADER include/spdk/conf.h 00:02:58.573 TEST_HEADER include/spdk/config.h 00:02:58.573 TEST_HEADER include/spdk/cpuset.h 00:02:58.573 TEST_HEADER include/spdk/crc32.h 00:02:58.573 TEST_HEADER include/spdk/crc16.h 00:02:58.573 TEST_HEADER include/spdk/crc64.h 00:02:58.573 TEST_HEADER include/spdk/dif.h 00:02:58.573 TEST_HEADER include/spdk/dma.h 00:02:58.573 TEST_HEADER include/spdk/endian.h 00:02:58.573 CC app/trace_record/trace_record.o 00:02:58.573 TEST_HEADER include/spdk/env.h 00:02:58.573 TEST_HEADER include/spdk/env_dpdk.h 00:02:58.573 CXX app/trace/trace.o 00:02:58.573 TEST_HEADER include/spdk/event.h 00:02:58.573 CC app/spdk_top/spdk_top.o 00:02:58.573 TEST_HEADER include/spdk/fd_group.h 00:02:58.573 CC app/spdk_lspci/spdk_lspci.o 00:02:58.573 TEST_HEADER include/spdk/fd.h 00:02:58.573 TEST_HEADER include/spdk/file.h 00:02:58.573 CC app/spdk_nvme_identify/identify.o 00:02:58.573 TEST_HEADER include/spdk/ftl.h 00:02:58.573 CC app/spdk_nvme_perf/perf.o 00:02:58.573 TEST_HEADER include/spdk/gpt_spec.h 00:02:58.573 TEST_HEADER include/spdk/hexlify.h 00:02:58.573 TEST_HEADER include/spdk/histogram_data.h 00:02:58.573 TEST_HEADER include/spdk/idxd.h 00:02:58.573 TEST_HEADER include/spdk/init.h 00:02:58.573 CC app/spdk_nvme_discover/discovery_aer.o 00:02:58.573 TEST_HEADER include/spdk/idxd_spec.h 00:02:58.573 TEST_HEADER include/spdk/ioat.h 00:02:58.573 TEST_HEADER include/spdk/ioat_spec.h 00:02:58.573 TEST_HEADER include/spdk/iscsi_spec.h 00:02:58.573 TEST_HEADER include/spdk/json.h 00:02:58.573 TEST_HEADER include/spdk/jsonrpc.h 00:02:58.573 TEST_HEADER include/spdk/keyring.h 00:02:58.573 TEST_HEADER include/spdk/keyring_module.h 00:02:58.573 TEST_HEADER include/spdk/likely.h 00:02:58.573 TEST_HEADER include/spdk/log.h 00:02:58.573 TEST_HEADER include/spdk/lvol.h 00:02:58.573 TEST_HEADER include/spdk/mmio.h 00:02:58.573 TEST_HEADER include/spdk/nbd.h 00:02:58.573 TEST_HEADER 
include/spdk/memory.h 00:02:58.573 TEST_HEADER include/spdk/net.h 00:02:58.573 TEST_HEADER include/spdk/notify.h 00:02:58.573 TEST_HEADER include/spdk/nvme_intel.h 00:02:58.573 TEST_HEADER include/spdk/nvme.h 00:02:58.573 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:58.573 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:58.573 TEST_HEADER include/spdk/nvme_spec.h 00:02:58.573 TEST_HEADER include/spdk/nvme_zns.h 00:02:58.573 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:58.573 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:58.573 TEST_HEADER include/spdk/nvmf.h 00:02:58.573 TEST_HEADER include/spdk/nvmf_spec.h 00:02:58.573 TEST_HEADER include/spdk/nvmf_transport.h 00:02:58.573 TEST_HEADER include/spdk/opal.h 00:02:58.573 TEST_HEADER include/spdk/opal_spec.h 00:02:58.573 TEST_HEADER include/spdk/pci_ids.h 00:02:58.573 TEST_HEADER include/spdk/pipe.h 00:02:58.573 TEST_HEADER include/spdk/queue.h 00:02:58.573 TEST_HEADER include/spdk/reduce.h 00:02:58.573 TEST_HEADER include/spdk/rpc.h 00:02:58.573 TEST_HEADER include/spdk/scheduler.h 00:02:58.573 TEST_HEADER include/spdk/scsi_spec.h 00:02:58.573 TEST_HEADER include/spdk/scsi.h 00:02:58.573 TEST_HEADER include/spdk/sock.h 00:02:58.573 TEST_HEADER include/spdk/stdinc.h 00:02:58.573 TEST_HEADER include/spdk/string.h 00:02:58.573 TEST_HEADER include/spdk/thread.h 00:02:58.573 TEST_HEADER include/spdk/trace.h 00:02:58.573 TEST_HEADER include/spdk/trace_parser.h 00:02:58.573 TEST_HEADER include/spdk/tree.h 00:02:58.573 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:58.573 TEST_HEADER include/spdk/ublk.h 00:02:58.573 TEST_HEADER include/spdk/util.h 00:02:58.573 TEST_HEADER include/spdk/uuid.h 00:02:58.573 TEST_HEADER include/spdk/version.h 00:02:58.573 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:58.573 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:58.573 TEST_HEADER include/spdk/vhost.h 00:02:58.573 TEST_HEADER include/spdk/vmd.h 00:02:58.573 TEST_HEADER include/spdk/zipf.h 00:02:58.573 TEST_HEADER include/spdk/xor.h 00:02:58.573 CXX test/cpp_headers/accel.o 00:02:58.573 CXX test/cpp_headers/accel_module.o 00:02:58.573 CXX test/cpp_headers/assert.o 00:02:58.573 CXX test/cpp_headers/barrier.o 00:02:58.573 CXX test/cpp_headers/base64.o 00:02:58.573 CXX test/cpp_headers/bdev.o 00:02:58.573 CC app/spdk_dd/spdk_dd.o 00:02:58.573 CXX test/cpp_headers/bdev_module.o 00:02:58.573 CXX test/cpp_headers/bdev_zone.o 00:02:58.573 CXX test/cpp_headers/bit_array.o 00:02:58.573 CXX test/cpp_headers/bit_pool.o 00:02:58.573 CXX test/cpp_headers/blob_bdev.o 00:02:58.573 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.573 CXX test/cpp_headers/blobfs.o 00:02:58.573 CXX test/cpp_headers/blob.o 00:02:58.573 CXX test/cpp_headers/conf.o 00:02:58.573 CXX test/cpp_headers/cpuset.o 00:02:58.573 CXX test/cpp_headers/config.o 00:02:58.573 CXX test/cpp_headers/crc16.o 00:02:58.573 CXX test/cpp_headers/crc32.o 00:02:58.573 CXX test/cpp_headers/dif.o 00:02:58.573 CXX test/cpp_headers/crc64.o 00:02:58.573 CXX test/cpp_headers/dma.o 00:02:58.573 CXX test/cpp_headers/env_dpdk.o 00:02:58.573 CXX test/cpp_headers/endian.o 00:02:58.573 CC app/nvmf_tgt/nvmf_main.o 00:02:58.573 CXX test/cpp_headers/env.o 00:02:58.573 CXX test/cpp_headers/event.o 00:02:58.573 CXX test/cpp_headers/fd_group.o 00:02:58.573 CC app/iscsi_tgt/iscsi_tgt.o 00:02:58.573 CXX test/cpp_headers/fd.o 00:02:58.573 CXX test/cpp_headers/file.o 00:02:58.573 CXX test/cpp_headers/ftl.o 00:02:58.573 CXX test/cpp_headers/gpt_spec.o 00:02:58.573 CXX test/cpp_headers/hexlify.o 00:02:58.573 CXX test/cpp_headers/histogram_data.o 
00:02:58.573 CXX test/cpp_headers/idxd.o 00:02:58.573 CXX test/cpp_headers/idxd_spec.o 00:02:58.573 CXX test/cpp_headers/init.o 00:02:58.573 CXX test/cpp_headers/ioat.o 00:02:58.573 CXX test/cpp_headers/ioat_spec.o 00:02:58.573 CXX test/cpp_headers/iscsi_spec.o 00:02:58.573 CXX test/cpp_headers/json.o 00:02:58.573 CXX test/cpp_headers/jsonrpc.o 00:02:58.573 CC test/app/histogram_perf/histogram_perf.o 00:02:58.573 CC test/env/memory/memory_ut.o 00:02:58.573 CC test/app/jsoncat/jsoncat.o 00:02:58.573 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:58.573 CC test/env/pci/pci_ut.o 00:02:58.573 CC test/env/vtophys/vtophys.o 00:02:58.573 CC test/app/stub/stub.o 00:02:58.573 CC test/thread/poller_perf/poller_perf.o 00:02:58.573 CC test/thread/lock/spdk_lock.o 00:02:58.835 CC examples/util/zipf/zipf.o 00:02:58.835 CXX test/cpp_headers/keyring.o 00:02:58.835 CC examples/ioat/verify/verify.o 00:02:58.835 CC examples/ioat/perf/perf.o 00:02:58.835 CC app/spdk_tgt/spdk_tgt.o 00:02:58.835 CC app/fio/nvme/fio_plugin.o 00:02:58.835 CC test/app/bdev_svc/bdev_svc.o 00:02:58.835 CC test/dma/test_dma/test_dma.o 00:02:58.835 LINK spdk_lspci 00:02:58.835 CC test/env/mem_callbacks/mem_callbacks.o 00:02:58.835 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:58.835 CC app/fio/bdev/fio_plugin.o 00:02:58.835 LINK rpc_client_test 00:02:58.835 LINK jsoncat 00:02:58.835 CXX test/cpp_headers/keyring_module.o 00:02:58.835 CXX test/cpp_headers/likely.o 00:02:58.835 CXX test/cpp_headers/log.o 00:02:58.835 LINK spdk_nvme_discover 00:02:58.835 LINK vtophys 00:02:58.835 CXX test/cpp_headers/lvol.o 00:02:58.835 LINK histogram_perf 00:02:58.835 CXX test/cpp_headers/memory.o 00:02:58.835 CXX test/cpp_headers/mmio.o 00:02:58.835 CXX test/cpp_headers/nbd.o 00:02:58.835 CXX test/cpp_headers/net.o 00:02:58.835 CXX test/cpp_headers/notify.o 00:02:58.835 CXX test/cpp_headers/nvme.o 00:02:58.835 CXX test/cpp_headers/nvme_intel.o 00:02:58.835 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.835 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.835 CXX test/cpp_headers/nvme_spec.o 00:02:58.835 LINK spdk_trace_record 00:02:58.835 CXX test/cpp_headers/nvme_zns.o 00:02:58.835 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.835 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.835 CXX test/cpp_headers/nvmf.o 00:02:58.835 CXX test/cpp_headers/nvmf_spec.o 00:02:58.835 CXX test/cpp_headers/nvmf_transport.o 00:02:58.835 CXX test/cpp_headers/opal.o 00:02:58.835 LINK poller_perf 00:02:58.835 CXX test/cpp_headers/opal_spec.o 00:02:58.835 LINK env_dpdk_post_init 00:02:58.835 CXX test/cpp_headers/pci_ids.o 00:02:58.835 CXX test/cpp_headers/pipe.o 00:02:58.835 CXX test/cpp_headers/queue.o 00:02:58.835 CXX test/cpp_headers/reduce.o 00:02:58.835 LINK zipf 00:02:58.835 CXX test/cpp_headers/rpc.o 00:02:59.099 CXX test/cpp_headers/scheduler.o 00:02:59.099 CXX test/cpp_headers/scsi.o 00:02:59.099 CXX test/cpp_headers/scsi_spec.o 00:02:59.099 CXX test/cpp_headers/sock.o 00:02:59.099 LINK interrupt_tgt 00:02:59.099 CXX test/cpp_headers/stdinc.o 00:02:59.099 CXX test/cpp_headers/thread.o 00:02:59.099 CXX test/cpp_headers/string.o 00:02:59.099 LINK iscsi_tgt 00:02:59.099 CXX test/cpp_headers/trace.o 00:02:59.099 CXX test/cpp_headers/trace_parser.o 00:02:59.099 CXX test/cpp_headers/tree.o 00:02:59.099 LINK stub 00:02:59.099 CXX test/cpp_headers/ublk.o 00:02:59.099 LINK nvmf_tgt 00:02:59.099 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:59.099 CXX test/cpp_headers/util.o 00:02:59.099 LINK verify 00:02:59.099 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:59.099 LINK 
bdev_svc 00:02:59.099 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:59.099 struct spdk_nvme_fdp_ruhs ruhs; 00:02:59.099 ^ 00:02:59.099 CXX test/cpp_headers/uuid.o 00:02:59.099 LINK spdk_trace 00:02:59.099 LINK ioat_perf 00:02:59.099 CXX test/cpp_headers/version.o 00:02:59.099 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.099 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.099 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:59.099 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:59.099 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:59.099 CXX test/cpp_headers/vhost.o 00:02:59.099 LINK spdk_tgt 00:02:59.099 CXX test/cpp_headers/vmd.o 00:02:59.099 CXX test/cpp_headers/xor.o 00:02:59.099 CXX test/cpp_headers/zipf.o 00:02:59.356 LINK test_dma 00:02:59.356 LINK pci_ut 00:02:59.356 LINK nvme_fuzz 00:02:59.356 LINK spdk_dd 00:02:59.356 1 warning generated. 00:02:59.614 LINK spdk_nvme_identify 00:02:59.614 LINK spdk_nvme 00:02:59.614 LINK spdk_bdev 00:02:59.614 LINK llvm_vfio_fuzz 00:02:59.614 LINK vhost_fuzz 00:02:59.614 LINK spdk_nvme_perf 00:02:59.614 LINK mem_callbacks 00:02:59.614 CC app/vhost/vhost.o 00:02:59.872 CC examples/vmd/led/led.o 00:02:59.872 LINK spdk_top 00:02:59.872 CC examples/idxd/perf/perf.o 00:02:59.872 CC examples/sock/hello_world/hello_sock.o 00:02:59.872 CC examples/vmd/lsvmd/lsvmd.o 00:02:59.872 CC examples/thread/thread/thread_ex.o 00:02:59.872 LINK llvm_nvme_fuzz 00:02:59.872 LINK vhost 00:02:59.872 LINK led 00:02:59.872 LINK lsvmd 00:03:00.141 LINK memory_ut 00:03:00.141 LINK hello_sock 00:03:00.141 LINK thread 00:03:00.141 LINK idxd_perf 00:03:00.141 LINK spdk_lock 00:03:00.707 LINK iscsi_fuzz 00:03:00.965 CC examples/nvme/hotplug/hotplug.o 00:03:00.965 CC examples/nvme/hello_world/hello_world.o 00:03:00.965 CC examples/nvme/abort/abort.o 00:03:00.965 CC examples/nvme/arbitration/arbitration.o 00:03:00.965 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:00.965 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.965 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:00.965 CC examples/nvme/reconnect/reconnect.o 00:03:00.965 CC test/event/reactor_perf/reactor_perf.o 00:03:00.965 CC test/event/reactor/reactor.o 00:03:00.965 CC test/event/event_perf/event_perf.o 00:03:00.965 CC test/event/app_repeat/app_repeat.o 00:03:00.965 CC test/event/scheduler/scheduler.o 00:03:00.965 LINK pmr_persistence 00:03:01.223 LINK cmb_copy 00:03:01.223 LINK hello_world 00:03:01.223 LINK hotplug 00:03:01.223 LINK reactor 00:03:01.223 LINK reactor_perf 00:03:01.223 LINK event_perf 00:03:01.223 LINK app_repeat 00:03:01.223 LINK reconnect 00:03:01.223 LINK arbitration 00:03:01.223 LINK abort 00:03:01.223 LINK scheduler 00:03:01.223 LINK nvme_manage 00:03:01.790 CC test/nvme/sgl/sgl.o 00:03:01.790 CC test/nvme/overhead/overhead.o 00:03:01.790 CC test/nvme/err_injection/err_injection.o 00:03:01.790 CC test/nvme/reset/reset.o 00:03:01.790 CC test/nvme/aer/aer.o 00:03:01.790 CC test/nvme/startup/startup.o 00:03:01.790 CC test/nvme/e2edp/nvme_dp.o 00:03:01.790 CC test/nvme/fused_ordering/fused_ordering.o 00:03:01.790 CC test/nvme/simple_copy/simple_copy.o 00:03:01.790 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:01.790 CC test/nvme/reserve/reserve.o 00:03:01.790 CC test/nvme/connect_stress/connect_stress.o 00:03:01.790 CC test/nvme/fdp/fdp.o 00:03:01.790 CC test/nvme/boot_partition/boot_partition.o 00:03:01.790 CC test/nvme/cuse/cuse.o 
00:03:01.790 CC test/nvme/compliance/nvme_compliance.o 00:03:01.790 CC test/blobfs/mkfs/mkfs.o 00:03:01.790 CC test/accel/dif/dif.o 00:03:01.790 CC test/lvol/esnap/esnap.o 00:03:01.790 LINK startup 00:03:01.790 LINK boot_partition 00:03:01.790 LINK doorbell_aers 00:03:01.791 LINK err_injection 00:03:01.791 LINK connect_stress 00:03:01.791 LINK fused_ordering 00:03:01.791 LINK reserve 00:03:01.791 LINK nvme_dp 00:03:01.791 LINK overhead 00:03:01.791 LINK simple_copy 00:03:01.791 LINK reset 00:03:01.791 LINK sgl 00:03:01.791 LINK aer 00:03:02.048 LINK mkfs 00:03:02.048 LINK fdp 00:03:02.048 LINK nvme_compliance 00:03:02.048 LINK dif 00:03:02.306 CC examples/accel/perf/accel_perf.o 00:03:02.306 CC examples/blob/hello_world/hello_blob.o 00:03:02.306 CC examples/blob/cli/blobcli.o 00:03:02.564 LINK hello_blob 00:03:02.565 LINK accel_perf 00:03:02.824 LINK blobcli 00:03:02.824 LINK cuse 00:03:03.761 CC examples/bdev/hello_world/hello_bdev.o 00:03:03.761 CC examples/bdev/bdevperf/bdevperf.o 00:03:03.761 LINK hello_bdev 00:03:04.380 LINK bdevperf 00:03:04.380 CC test/bdev/bdevio/bdevio.o 00:03:04.639 LINK bdevio 00:03:06.540 CC examples/nvmf/nvmf/nvmf.o 00:03:06.540 LINK esnap 00:03:06.540 LINK nvmf 00:03:09.070 00:03:09.070 real 0m53.786s 00:03:09.070 user 7m27.303s 00:03:09.070 sys 2m40.881s 00:03:09.070 18:18:26 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:09.070 18:18:26 make -- common/autotest_common.sh@10 -- $ set +x 00:03:09.070 ************************************ 00:03:09.070 END TEST make 00:03:09.070 ************************************ 00:03:09.070 18:18:26 -- common/autotest_common.sh@1142 -- $ return 0 00:03:09.070 18:18:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:09.070 18:18:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:09.070 18:18:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:09.070 18:18:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.070 18:18:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:09.070 18:18:26 -- pm/common@44 -- $ pid=3686183 00:03:09.070 18:18:26 -- pm/common@50 -- $ kill -TERM 3686183 00:03:09.070 18:18:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.070 18:18:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:09.070 18:18:26 -- pm/common@44 -- $ pid=3686185 00:03:09.070 18:18:26 -- pm/common@50 -- $ kill -TERM 3686185 00:03:09.070 18:18:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.070 18:18:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:09.070 18:18:26 -- pm/common@44 -- $ pid=3686187 00:03:09.070 18:18:26 -- pm/common@50 -- $ kill -TERM 3686187 00:03:09.070 18:18:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.070 18:18:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:09.070 18:18:26 -- pm/common@44 -- $ pid=3686210 00:03:09.070 18:18:26 -- pm/common@50 -- $ sudo -E kill -TERM 3686210 00:03:09.070 18:18:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:09.070 18:18:26 -- nvmf/common.sh@7 -- # uname -s 00:03:09.070 18:18:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:09.070 18:18:26 -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:03:09.070 18:18:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:09.070 18:18:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:09.070 18:18:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:09.071 18:18:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:09.071 18:18:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:09.071 18:18:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:09.071 18:18:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:09.071 18:18:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:09.071 18:18:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:03:09.071 18:18:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:03:09.071 18:18:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:09.071 18:18:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:09.071 18:18:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:09.071 18:18:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:09.071 18:18:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:09.071 18:18:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:09.071 18:18:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:09.071 18:18:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:09.071 18:18:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.071 18:18:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.071 18:18:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.071 18:18:26 -- paths/export.sh@5 -- # export PATH 00:03:09.071 18:18:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.071 18:18:26 -- nvmf/common.sh@47 -- # : 0 00:03:09.071 18:18:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:09.071 18:18:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:09.071 18:18:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:09.071 18:18:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:09.071 18:18:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:09.071 18:18:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:09.071 18:18:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:09.071 18:18:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:09.071 18:18:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 
0 ']' 00:03:09.071 18:18:26 -- spdk/autotest.sh@32 -- # uname -s 00:03:09.071 18:18:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:09.071 18:18:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:09.071 18:18:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:09.071 18:18:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:09.071 18:18:26 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:09.071 18:18:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:09.071 18:18:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:09.071 18:18:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:09.071 18:18:26 -- spdk/autotest.sh@48 -- # udevadm_pid=3746093 00:03:09.071 18:18:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:09.071 18:18:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:09.071 18:18:26 -- pm/common@17 -- # local monitor 00:03:09.071 18:18:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.071 18:18:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.071 18:18:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.071 18:18:26 -- pm/common@21 -- # date +%s 00:03:09.071 18:18:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.071 18:18:26 -- pm/common@21 -- # date +%s 00:03:09.071 18:18:26 -- pm/common@25 -- # sleep 1 00:03:09.071 18:18:26 -- pm/common@21 -- # date +%s 00:03:09.071 18:18:26 -- pm/common@21 -- # date +%s 00:03:09.071 18:18:26 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721578706 00:03:09.071 18:18:26 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721578706 00:03:09.071 18:18:26 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721578706 00:03:09.071 18:18:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721578706 00:03:09.071 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721578706_collect-vmstat.pm.log 00:03:09.071 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721578706_collect-cpu-load.pm.log 00:03:09.071 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721578706_collect-cpu-temp.pm.log 00:03:09.071 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721578706_collect-bmc-pm.bmc.pm.log 00:03:10.007 18:18:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.007 18:18:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:10.007 18:18:27 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:03:10.007 18:18:27 -- common/autotest_common.sh@10 -- # set +x 00:03:10.007 18:18:27 -- spdk/autotest.sh@59 -- # create_test_list 00:03:10.007 18:18:27 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:10.007 18:18:27 -- common/autotest_common.sh@10 -- # set +x 00:03:10.007 18:18:27 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:10.007 18:18:27 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:10.007 18:18:27 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:10.007 18:18:27 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:10.007 18:18:27 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:10.007 18:18:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:10.007 18:18:27 -- common/autotest_common.sh@1455 -- # uname 00:03:10.007 18:18:27 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:10.007 18:18:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:10.007 18:18:27 -- common/autotest_common.sh@1475 -- # uname 00:03:10.007 18:18:28 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:10.007 18:18:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:10.007 18:18:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:10.007 18:18:28 -- spdk/autotest.sh@72 -- # hash lcov 00:03:10.007 18:18:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:10.007 18:18:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:10.007 18:18:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:10.007 18:18:28 -- common/autotest_common.sh@10 -- # set +x 00:03:10.007 18:18:28 -- spdk/autotest.sh@91 -- # rm -f 00:03:10.007 18:18:28 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.198 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:03:14.198 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:14.198 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:16.147 18:18:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:16.147 18:18:34 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:16.147 18:18:34 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:16.147 18:18:34 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:16.147 18:18:34 -- 
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:16.147 18:18:34 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:16.147 18:18:34 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:16.147 18:18:34 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.147 18:18:34 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:16.147 18:18:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:16.147 18:18:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:16.147 18:18:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:16.147 18:18:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:16.147 18:18:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:16.147 18:18:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:16.147 No valid GPT data, bailing 00:03:16.147 18:18:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:16.147 18:18:34 -- scripts/common.sh@391 -- # pt= 00:03:16.147 18:18:34 -- scripts/common.sh@392 -- # return 1 00:03:16.147 18:18:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:16.147 1+0 records in 00:03:16.147 1+0 records out 00:03:16.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00704253 s, 149 MB/s 00:03:16.147 18:18:34 -- spdk/autotest.sh@118 -- # sync 00:03:16.147 18:18:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:16.147 18:18:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:16.147 18:18:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:21.424 18:18:38 -- spdk/autotest.sh@124 -- # uname -s 00:03:21.424 18:18:38 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:21.424 18:18:38 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:21.424 18:18:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.424 18:18:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.424 18:18:38 -- common/autotest_common.sh@10 -- # set +x 00:03:21.424 ************************************ 00:03:21.424 START TEST setup.sh 00:03:21.424 ************************************ 00:03:21.424 18:18:38 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:21.424 * Looking for test storage... 00:03:21.424 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:21.424 18:18:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:21.424 18:18:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:21.424 18:18:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:21.424 18:18:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.424 18:18:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.424 18:18:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:21.424 ************************************ 00:03:21.424 START TEST acl 00:03:21.424 ************************************ 00:03:21.424 18:18:38 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:21.424 * Looking for test storage... 
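Before touching /dev/nvme0n1 above, autotest asked spdk-gpt.py and blkid for a partition-table signature ("No valid GPT data, bailing", empty PTTYPE) and only then zeroed the first MiB of the device. A minimal sketch of that guard, using the device path from this run:

    # Zero the start of the namespace only if no partition table is detected.
    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev") || pt=
    if [[ -z $pt ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1
    fi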
00:03:21.424 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:21.424 18:18:39 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:21.424 18:18:39 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:21.424 18:18:39 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:21.424 18:18:39 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:21.424 18:18:39 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:21.424 18:18:39 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:21.424 18:18:39 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:21.424 18:18:39 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:21.424 18:18:39 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:21.424 18:18:39 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:21.424 18:18:39 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:21.424 18:18:39 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:21.424 18:18:39 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:21.424 18:18:39 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:21.424 18:18:39 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.424 18:18:39 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.991 18:18:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:27.991 18:18:45 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:27.991 18:18:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.991 18:18:45 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:27.991 18:18:45 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.991 18:18:45 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:31.274 Hugepages 00:03:31.274 node hugesize free / total 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 00:03:31.274 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:31.274 18:18:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:31.274 18:18:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.274 18:18:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.274 18:18:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.274 ************************************ 00:03:31.274 START TEST denied 00:03:31.274 ************************************ 00:03:31.274 18:18:49 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:31.274 18:18:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:03:31.274 18:18:49 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:03:31.274 18:18:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:31.274 18:18:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.274 18:18:49 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:37.866 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:03:37.866 18:18:55 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:03:37.866 18:18:55 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:37.866 18:18:55 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:37.866 18:18:55 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:03:37.866 18:18:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:03:37.866 18:18:55 
setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:37.866 18:18:55 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:37.866 18:18:55 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:37.866 18:18:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.866 18:18:55 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.472 00:03:44.472 real 0m13.418s 00:03:44.472 user 0m4.122s 00:03:44.472 sys 0m8.470s 00:03:44.472 18:19:02 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.472 18:19:02 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:44.472 ************************************ 00:03:44.472 END TEST denied 00:03:44.472 ************************************ 00:03:44.472 18:19:02 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:44.472 18:19:02 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:44.472 18:19:02 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.472 18:19:02 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.472 18:19:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.472 ************************************ 00:03:44.472 START TEST allowed 00:03:44.472 ************************************ 00:03:44.472 18:19:02 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:44.472 18:19:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:03:44.472 18:19:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:44.472 18:19:02 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:03:44.472 18:19:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.472 18:19:02 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:54.445 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:03:54.445 18:19:11 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:54.445 18:19:11 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:54.445 18:19:11 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:54.445 18:19:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.445 18:19:11 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.717 00:03:59.717 real 0m14.844s 00:03:59.717 user 0m3.344s 00:03:59.717 sys 0m8.091s 00:03:59.717 18:19:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.717 18:19:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:59.717 ************************************ 00:03:59.717 END TEST allowed 00:03:59.717 ************************************ 00:03:59.717 18:19:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:59.717 00:03:59.717 real 0m38.470s 00:03:59.717 user 0m10.695s 00:03:59.717 sys 0m23.693s 00:03:59.717 18:19:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.717 18:19:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:59.717 ************************************ 00:03:59.717 END TEST acl 00:03:59.717 ************************************ 00:03:59.717 18:19:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:59.717 
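The denied/allowed pair just completed exercises the PCI filtering of scripts/setup.sh: PCI_BLOCKED makes setup.sh skip a controller, PCI_ALLOWED restricts binding to it. A minimal sketch of the same flow, using the controller from this run (paths assume an SPDK checkout):

    # Deny one controller, then allow only that controller.
    PCI_BLOCKED=' 0000:1a:00.0' scripts/setup.sh config   # log shows: Skipping denied controller at 0000:1a:00.0
    scripts/setup.sh reset
    PCI_ALLOWED='0000:1a:00.0' scripts/setup.sh config    # log shows: 0000:1a:00.0 ... nvme -> vfio-pci
    readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver  # check which driver ended up bound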
18:19:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.717 18:19:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.717 18:19:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.717 18:19:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.717 ************************************ 00:03:59.717 START TEST hugepages 00:03:59.717 ************************************ 00:03:59.717 18:19:17 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.717 * Looking for test storage... 00:03:59.717 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 72541880 kB' 'MemAvailable: 76045240 kB' 'Buffers: 4292 kB' 'Cached: 12237972 kB' 'SwapCached: 0 kB' 'Active: 9318660 kB' 'Inactive: 3536596 kB' 'Active(anon): 8810736 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616176 kB' 'Mapped: 170684 kB' 'Shmem: 8197744 kB' 'KReclaimable: 224928 kB' 'Slab: 614012 kB' 'SReclaimable: 224928 kB' 'SUnreclaim: 389084 kB' 'KernelStack: 16496 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438216 kB' 'Committed_AS: 10271512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214328 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.717 18:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue [... the same IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue trace repeats for every field from MemFree through SecPageTables ...] 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
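The continue chain here is setup/common.sh's get_meminfo scanning /proc/meminfo field by field until the requested key (Hugepagesize) matches. A minimal sketch of that scan, for the no-node case shown in this trace (node= is empty, so mem_f=/proc/meminfo; the script itself iterates a mapfile array rather than reading the file directly):

    # Walk key/value pairs, skipping with continue until the key matches.
    get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
      done < /proc/meminfo
      return 1
    }
    # get_meminfo Hugepagesize -> 2048 here, as echoed at the end of the scan below.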
00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... the scan continues identically for NFS_Unstable through HugePages_Free ...] 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.718
18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.718 18:19:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.719 18:19:17 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.719 18:19:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:59.719 18:19:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.719 18:19:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.719 18:19:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.719 ************************************ 00:03:59.719 START TEST default_setup 00:03:59.719 ************************************ 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.719 18:19:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:03.901 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:00:04.5 
(8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.901 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.435 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74752672 kB' 'MemAvailable: 78256000 kB' 'Buffers: 4292 kB' 'Cached: 12238132 kB' 'SwapCached: 0 kB' 'Active: 9335052 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827128 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632512 kB' 'Mapped: 170136 kB' 'Shmem: 8197904 kB' 'KReclaimable: 224864 kB' 'Slab: 613164 kB' 
'SReclaimable: 224864 kB' 'SUnreclaim: 388300 kB' 'KernelStack: 16416 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10284140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214408 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.965 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... the same per-field trace repeats for MemFree through SwapTotal ...] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
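This second scan is the same get_meminfo parser re-run inside default_setup, this time querying AnonHugePages. The page count it is verifying is plain arithmetic: get_test_nr_hugepages was invoked with 2097152 (kB) for node 0, and at the 2048 kB page size read earlier that is 1024 pages, matching both nr_hugepages=1024 in the trace and HugePages_Total: 1024 in the meminfo dump above. A one-line sketch of the calculation (variable-free, values from this run):

    # 2 GiB requested at 2048 kB per hugepage -> 1024 pages.
    echo $(( 2097152 / 2048 ))   # 1024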
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- 
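The loop traced above is setup/common.sh's get_meminfo() scanning every /proc/meminfo key until it reaches the requested one (here AnonHugePages, value 0). A minimal sketch of the shape the trace implies, reconstructed from the xtrace rather than copied from the SPDK source, so exact wording and line numbers may differ:

  # Reconstructed sketch of get_meminfo; approximate, not verbatim SPDK source.
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # With a node argument, prefer the per-node meminfo file when it exists.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node files prefix each line with "Node N "; strip it.
      # (+([0-9]) is an extglob pattern and requires: shopt -s extglob)
      mem=("${mem[@]#Node +([0-9]) }")

      while IFS=': ' read -r var val _; do
          # Every non-matching key produces one "[[ X == \A\n\o\n... ]] / continue"
          # pair in the trace -- hence the long runs above.
          [[ $var == "$get" ]] || continue
          echo "$val" # e.g. "0" for "AnonHugePages: 0 kB"; the unit lands in $_
          return 0
      done < <(printf '%s\n' "${mem[@]}")

      return 1
  }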
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74752020 kB' 'MemAvailable: 78255348 kB' 'Buffers: 4292 kB' 'Cached: 12238136 kB' 'SwapCached: 0 kB' 'Active: 9335508 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827584 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632972 kB' 'Mapped: 170064 kB' 'Shmem: 8197908 kB' 'KReclaimable: 224864 kB' 'Slab: 613164 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 388300 kB' 'KernelStack: 16480 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10284156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
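A quick sanity check on the snapshot itself: the hugepage lines are self-consistent, because Hugetlb is the pool size times the default hugepage size, and HugePages_Free equals HugePages_Total, so none of the 1024 pre-allocated 2 MiB pages is in use yet:

  $ echo $((1024 * 2048))   # HugePages_Total x Hugepagesize, in kB
  2097152                   # matches the 'Hugetlb: 2097152 kB' line above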
00:04:08.966 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read/compare loop over keys MemTotal ... HugePages_Rsvd (dump order above): each != HugePages_Surp, continue
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
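Note the "local node=" / "[[ -e /sys/devices/system/node/node/meminfo ]]" pair in each get_meminfo prologue: every call in this test passes no node, so the probe tests the nonexistent path node/meminfo and falls through to /proc/meminfo. A hypothetical per-node call (node index and values invented purely for illustration) would take the other branch:

  # Hypothetical: query NUMA node 0 instead of the machine-wide file.
  get_meminfo HugePages_Total 0
  # Reads /sys/devices/system/node/node0/meminfo, where lines look like
  #   Node 0 HugePages_Total:   512
  # The "Node 0 " prefix is stripped so the same IFS=': ' parser applies.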
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74751016 kB' 'MemAvailable: 78254344 kB' 'Buffers: 4292 kB' 'Cached: 12238156 kB' 'SwapCached: 0 kB' 'Active: 9335536 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827612 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632972 kB' 'Mapped: 170064 kB' 'Shmem: 8197928 kB' 'KReclaimable: 224864 kB' 'Slab: 613164 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 388300 kB' 'KernelStack: 16480 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10284176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
00:04:08.967 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read/compare loop over keys MemTotal ... HugePages_Free (dump order above): each != HugePages_Rsvd, continue
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:08.968 nr_hugepages=1024
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:08.968 resv_hugepages=0
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:08.968 surplus_hugepages=0
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:08.968 anon_hugepages=0
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
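With all three counters gathered (anon=0, surp=0, resv=0), hugepages.sh prints them and asserts that the kernel still accounts for exactly the 1024 pages the test configured. A sketch of that verification, using the variable names visible in the trace (surrounding code assumed, not copied from the SPDK source):

  # Sketch of the check traced at setup/hugepages.sh@102-109.
  echo "nr_hugepages=$nr_hugepages"   # 1024: HugePages_Total reported by the kernel
  echo "resv_hugepages=$resv"         # 0: HugePages_Rsvd (reserved, not yet faulted)
  echo "surplus_hugepages=$surp"      # 0: HugePages_Surp (overcommit beyond the pool)
  echo "anon_hugepages=$anon"         # 0: AnonHugePages (transparent hugepages)
  (( 1024 == nr_hugepages + surp + resv )) # requested pool == accounted-for total
  (( 1024 == nr_hugepages ))               # and none of it came from surplus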
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:08.968 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74749932 kB' 'MemAvailable: 78253260 kB' 'Buffers: 4292 kB' 'Cached: 12238196 kB' 'SwapCached: 0 kB' 'Active: 9335216 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827292 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632576 kB' 'Mapped: 170064 kB' 'Shmem: 8197968 kB' 'KReclaimable: 224864 kB' 'Slab: 613164 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 388300 kB' 'KernelStack: 16464 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10284200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
[loop output condensed: each /proc/meminfo key from MemTotal through Unaccepted is tested against HugePages_Total and skipped via "continue"]
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:08.969 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 40014972 kB' 'MemUsed: 8101996 kB' 'SwapCached: 0 kB' 'Active: 3888224 kB' 'Inactive: 130876 kB' 'Active(anon): 3510604 kB' 'Inactive(anon): 0 kB' 'Active(file): 377620 kB' 'Inactive(file): 130876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3537296 kB' 'Mapped: 116644 kB' 'AnonPages: 484976 kB' 'Shmem: 3028800 kB' 'KernelStack: 8696 kB' 'PageTables: 6368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117988 kB' 'Slab: 341360 kB' 'SReclaimable: 117988 kB' 'SUnreclaim: 223372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
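The node-qualified call traced above, get_meminfo HugePages_Surp 0, goes through the same parser but reads /sys/devices/system/node/node0/meminfo, whose raw lines carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion strips it so the lines parse exactly like /proc/meminfo. Usage as the trace implies (return values taken from the snapshots above):

    # Usage implied by the trace; the node argument is optional.
    get_meminfo HugePages_Total        # system-wide, from /proc/meminfo -> "1024" here
    get_meminfo HugePages_Surp 0       # node 0 only, from /sys/devices/system/node/node0/meminfo,
                                       # where a raw line reads "Node 0 HugePages_Surp: 0";
                                       # after the prefix strip it parses the same way -> "0"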
[loop output condensed: each node0 meminfo key from MemTotal through HugePages_Free is tested against HugePages_Surp and skipped via "continue"]
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:08.970 node0=1024 expecting 1024
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:08.970
00:04:08.970 real	0m9.138s
00:04:08.970 user	0m1.949s
00:04:08.970 sys	0m4.012s
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:08.970 18:19:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:08.970 ************************************
00:04:08.970 END TEST default_setup
00:04:08.970 ************************************
00:04:08.970 18:19:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:08.970 18:19:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:08.970 18:19:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:08.970 18:19:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:08.970 18:19:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:08.970 ************************************
00:04:08.970 START TEST per_node_1G_alloc
00:04:08.970 ************************************
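The trace that follows opens with get_test_nr_hugepages 1048576 0 1, which converts a per-node allocation size in kB into a hugepage count. A sketch of the arithmetic it implies, assuming default_hugepages holds this box's 2048 kB Hugepagesize (the names follow the trace; the function body is reconstructed, not quoted from setup/hugepages.sh):

    default_hugepages=2048                   # Hugepagesize in kB (see the meminfo snapshot)

    # 1048576 kB (1 GiB) per node / 2048 kB per page = 512 pages per node.
    get_test_nr_hugepages() {
        local size=$1                        # requested size per node, in kB
        (( $# > 1 )) && shift                # remaining args are the target node ids
        local node_ids=("$@")
        (( size >= default_hugepages ))      # sanity check seen at hugepages.sh@55
        nr_hugepages=$((size / default_hugepages))          # 1048576 / 2048 = 512
        get_test_nr_hugepages_per_node "${node_ids[@]}"     # nodes_test[0]=nodes_test[1]=512
    }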
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.970 18:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:13.155 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:13.155 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:13.155 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:13.156 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:13.156 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:13.156 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
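The NRHUGE=512 / HUGENODE=0,1 assignments above hand the actual reservation to scripts/setup.sh (the per-function local IFS=, is what joins the node ids with a comma). Outside the test harness, the equivalent manual invocation, using the same workspace path as this job, would be:

    # Request 512 x 2 MiB hugepages on each of NUMA nodes 0 and 1, then apply:
    NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh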
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.615 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74786680 kB' 'MemAvailable: 78290008 kB' 'Buffers: 4292 kB' 'Cached: 12238304 kB' 'SwapCached: 0 kB' 'Active: 9334280 kB' 'Inactive: 3536596 kB' 'Active(anon): 8826356 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631492 kB' 'Mapped: 169256 kB' 'Shmem: 8198076 kB' 'KReclaimable: 224864 kB' 'Slab: 613556 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 388692 kB' 'KernelStack: 16384 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10274208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214296 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
[loop output condensed: each /proc/meminfo key from MemTotal through WritebackTmp is tested against AnonHugePages and skipped via "continue"; the capture ends mid-loop]
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
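The records above show setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: IFS=': ' splits each line into name and value, every non-matching name hits continue, and the first match (AnonHugePages, value 0) is echoed back to the caller. A minimal standalone sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK tree (the per-node handling is inferred from the [[ -e /sys/devices/system/node/node/meminfo ]] test that appears below):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in this trace (assumption:
# reconstructed from the xtrace, not the actual setup/common.sh).
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-}
    local line var val
    local mem_f=/proc/meminfo
    # With a node argument, read the per-node sysfs copy instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node +([0-9]) }        # strip the "Node N " prefix in per-node files
        IFS=': ' read -r var val _ <<< "$line"
        # Non-matching fields are skipped, producing the long run of
        # continue records in the trace; the first match is echoed.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < "$mem_f"
    echo 0
}

get_meminfo AnonHugePages    # prints 0 on this box, matching anon=0 above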
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.616 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74785424 kB' 'MemAvailable: 78288752 kB' 'Buffers: 4292 kB' 'Cached: 12238304 kB' 'SwapCached: 0 kB' 'Active: 9334120 kB' 'Inactive: 3536596 kB' 'Active(anon): 8826196 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631296 kB' 'Mapped: 169240 kB' 'Shmem: 8198076 kB' 'KReclaimable: 224864 kB' 'Slab: 613556 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 388692 kB' 'KernelStack: 16368 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10274224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
00:04:14.617 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] fails and falls through to continue for every field from MemTotal through HugePages_Rsvd
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
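Each value the test needs triggers a full rescan of /proc/meminfo: AnonHugePages above, HugePages_Surp here, and HugePages_Rsvd and HugePages_Total next. A hypothetical one-pass variant (illustration only, not code from the SPDK tree) caches the whole file in an associative array so the later lookups no longer reread it:

#!/usr/bin/env bash
# Hypothetical one-pass reader: load every /proc/meminfo field once,
# then answer all later lookups from the array instead of rescanning.
declare -A meminfo
while IFS=': ' read -r var val _; do
    meminfo[$var]=$val
done < /proc/meminfo

echo "anon=${meminfo[AnonHugePages]:-0}"
echo "surp=${meminfo[HugePages_Surp]:-0}"
echo "resv=${meminfo[HugePages_Rsvd]:-0}"
echo "nr_hugepages=${meminfo[HugePages_Total]:-0}"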
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.618 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74785328 kB' 'MemAvailable: 78288656 kB' 'Buffers: 4292 kB' 'Cached: 12238308 kB' 'SwapCached: 0 kB' 'Active: 9334424 kB' 'Inactive: 3536596 kB' 'Active(anon): 8826500 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631624 kB' 'Mapped: 169240 kB' 'Shmem: 8198080 kB' 'KReclaimable: 224864 kB' 'Slab: 613548 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 388684 kB' 'KernelStack: 16384 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10275244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
00:04:14.619 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] fails and falls through to continue for every field from MemTotal through HugePages_Free
00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:14.882 nr_hugepages=1024
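With anon, surp, and resv all read back as 0 and nr_hugepages at 1024, the assertions traced next (setup/hugepages.sh@107 and @109) reduce to simple arithmetic. A sketch of that check, with variable names following the trace and values from this run:

# Accounting check reconstructed from the trace records that follow:
# the requested pool must equal allocated + surplus + reserved pages.
nr_hugepages=1024   # HugePages_Total
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
(( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
(( 1024 == nr_hugepages )) && echo "all 1024 pages allocated, none surplus or reserved"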
nr_hugepages=1024 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.882 resv_hugepages=0 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.882 surplus_hugepages=0 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.882 anon_hugepages=0 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74784900 kB' 'MemAvailable: 78288228 kB' 'Buffers: 4292 kB' 'Cached: 12238328 kB' 'SwapCached: 0 kB' 'Active: 9335336 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827412 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632532 kB' 'Mapped: 169240 kB' 'Shmem: 8198100 kB' 'KReclaimable: 224864 kB' 'Slab: 613540 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 388676 kB' 'KernelStack: 16368 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10276840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214296 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- 
00:04:14.882 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:14.882 [per-field xtrace elided: every /proc/meminfo key from MemTotal through Unaccepted fails the match and the continue/IFS/read triplet repeats until HugePages_Total is reached]
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
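get_nodes, which runs next, discovers the NUMA layout by globbing the per-node sysfs directories; the +([0-9]) extglob pattern matches node0, node1, and so on. A sketch of the same enumeration, under the assumption (true for this box) that each node stages 512 pages:

    shopt -s extglob nullglob
    nodes_sys=()
    no_nodes=0
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node",
        # leaving just the numeric node index as the array subscript.
        nodes_sys[${node##*node}]=512
        (( ++no_nodes ))
    done
    echo "no_nodes=$no_nodes"    # 2 on this machine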
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.884 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 41086696 kB' 'MemUsed: 7030272 kB' 'SwapCached: 0 kB' 'Active: 3887476 kB' 'Inactive: 130876 kB' 'Active(anon): 3509856 kB' 'Inactive(anon): 0 kB' 'Active(file): 377620 kB' 'Inactive(file): 130876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3537388 kB' 'Mapped: 116040 kB' 'AnonPages: 484072 kB' 'Shmem: 3028892 kB' 'KernelStack: 8632 kB' 'PageTables: 6104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117988 kB' 'Slab: 341096 kB' 'SReclaimable: 117988 kB' 'SUnreclaim: 223108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
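Unlike /proc/meminfo, a per-node meminfo file prefixes every line with the node name ("Node 0 MemTotal: ..."), which is why common.sh@29 rewrites the whole mapfile'd array before parsing. A sketch of just that normalization step (the sample lines are taken from the node0 dump above):

    shopt -s extglob
    mapfile -t mem <<< $'Node 0 MemTotal: 48116968 kB\nNode 0 HugePages_Surp: 0'
    # One expansion strips the leading "Node <n> " from every element.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"    # -> "MemTotal: 48116968 kB", "HugePages_Surp: 0"

After this step the per-node lines parse with the same IFS=': ' loop used for /proc/meminfo.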
00:04:14.884 [per-field xtrace elided: get_meminfo walks node0's meminfo keys from MemTotal down, each failing the HugePages_Surp match, until the requested key is reached]
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.885 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176560 kB' 'MemFree: 33696836 kB' 'MemUsed: 10479724 kB' 'SwapCached: 0 kB' 'Active: 5447348 kB' 'Inactive: 3405720 kB' 'Active(anon): 5317044 kB' 'Inactive(anon): 0 kB' 'Active(file): 130304 kB' 'Inactive(file): 3405720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8705232 kB' 'Mapped: 53200 kB' 'AnonPages: 147892 kB' 'Shmem: 5169208 kB' 'KernelStack: 7848 kB' 'PageTables: 2836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106876 kB' 'Slab: 272444 kB' 'SReclaimable: 106876 kB' 'SUnreclaim: 165568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:14.885 [per-field xtrace elided: node1's meminfo keys from MemTotal down each fail the HugePages_Surp match until the requested key is reached]
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:14.887 node0=512 expecting 512
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:14.887 node1=512 expecting 512
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:14.887
00:04:14.887 real 0m5.974s
00:04:14.887 user 0m2.116s
00:04:14.887 sys 0m3.924s
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:14.887 18:19:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:14.887 ************************************
00:04:14.887 END TEST per_node_1G_alloc
00:04:14.887 ************************************
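The node0=512/node1=512 result the test asserts can also be cross-checked directly against the kernel's per-node hugepage counters, without re-parsing meminfo. A small sketch of that check (2048kB matches the Hugepagesize reported in the dumps above; the path is the standard kernel sysfs layout):

    # Print each node's 2 MiB hugepage pool size straight from sysfs.
    for n in /sys/devices/system/node/node[0-9]*; do
        echo "${n##*/}=$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
    done
    # Expected here: node0=512 and node1=512, i.e. the 1024 global pages
    # split evenly across both NUMA nodes.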
00:04:14.887 18:19:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:14.887 18:19:32 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:14.887 18:19:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:14.887 18:19:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:14.887 18:19:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:14.887 ************************************
00:04:14.887 START TEST even_2G_alloc
00:04:14.887 ************************************
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.887 18:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:14.887 18:19:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:14.887 18:19:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:14.887 18:19:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:14.887 18:19:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 --
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:19.075 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.075 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.075 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:20.984 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:20.984 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:20.984 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.984 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.984 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:20.984 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:20.984 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74774804 kB' 'MemAvailable: 78278132 kB' 'Buffers: 4292 
kB' 'Cached: 12238488 kB' 'SwapCached: 0 kB' 'Active: 9334596 kB' 'Inactive: 3536596 kB' 'Active(anon): 8826672 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631672 kB' 'Mapped: 169380 kB' 'Shmem: 8198260 kB' 'KReclaimable: 224864 kB' 'Slab: 612832 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 387968 kB' 'KernelStack: 16480 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10274548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214344 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
00:04:20.985 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [read loop: every snapshot key from MemTotal through HardwareCorrupted tested against AnonHugePages; no match, continue]
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74774304 kB' 'MemAvailable: 78277632 kB' 'Buffers: 4292 kB' 'Cached: 12238492 kB' 'SwapCached: 0 kB' 'Active: 9335356 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827432 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632456 kB' 'Mapped: 169364 kB' 'Shmem: 8198264 kB' 'KReclaimable: 224864 kB' 'Slab: 612832 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 387968 kB' 'KernelStack: 16560 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10274696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214312 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
00:04:20.986 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [read loop: every snapshot key from MemTotal through HugePages_Rsvd tested against HugePages_Surp; no match, continue]
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
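At this point verify_nr_hugepages has anon=0 and surp=0 in hand and is about to fetch HugePages_Rsvd. Condensing the flow traced at setup/hugepages.sh@97-130 into a hypothetical bash sketch (names mirror the trace; the real script's bookkeeping may differ):

    # Hedged sketch of the verification flow; get_meminfo is the helper
    # sketched earlier, nodes_test holds the expected even split
    # (1024 pages over 2 NUMA nodes).
    verify_even_alloc_sketch() {
        local anon surp resv node got
        local nodes_test=(512 512)           # expected pages per node
        anon=$(get_meminfo AnonHugePages)    # transparent hugepage usage; 0 kB here
        surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the pool; 0 here
        resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted; 0 here
        for node in "${!nodes_test[@]}"; do
            got=$(get_meminfo HugePages_Total "$node")   # node-local count
            echo "node$node=$got expecting ${nodes_test[node]}"
            # The @130 check: the per-node count must equal the expectation.
            [[ $got == "${nodes_test[node]}" ]] || return 1
        done
    }

With all three counters at zero against the snapshots above, the configured pool is neither shrunken by reservations nor inflated by surplus pages, so the per-node comparison alone decides the test.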
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74773808 kB' 'MemAvailable: 78277136 kB' 'Buffers: 4292 kB' 'Cached: 12238520 kB' 'SwapCached: 0 kB' 'Active: 9335568 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827644 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632644 kB' 'Mapped: 169364 kB' 'Shmem: 8198292 kB' 'KReclaimable: 224864 kB' 'Slab: 612832 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 387968 kB' 'KernelStack: 16576 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10275088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214328 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
00:04:20.988 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [read loop: snapshot keys from MemTotal through VmallocChunk tested against HugePages_Rsvd; no match so far, continue]
00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:20.990 nr_hugepages=1024 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.990 resv_hugepages=0 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.990 surplus_hugepages=0 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.990 anon_hugepages=0 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- 
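The @107/@109 guards just above reduce to one accounting identity over the values echoed in this run. A minimal sketch of that check with this run's numbers (variable names mirror the trace, but this is an illustration, not the SPDK scripts themselves):

    #!/usr/bin/env bash
    # Sketch: the requested hugepage count must match what /proc/meminfo
    # reports once surplus and reserved pages are folded in.
    nr_hugepages=1024   # HugePages_Total (value from this run)
    surp=0              # HugePages_Surp (value from this run)
    resv=0              # HugePages_Rsvd, just read above
    if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
        echo "all 1024 x 2048 kB pages accounted for (2 GiB total)"
    fi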
00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.990 18:19:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74773824 kB' 'MemAvailable: 78277152 kB' 'Buffers: 4292 kB' 'Cached: 12238536 kB' 'SwapCached: 0 kB' 'Active: 9335752 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827828 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632816 kB' 'Mapped: 169364 kB' 'Shmem: 8198308 kB' 'KReclaimable: 224864 kB' 'Slab: 612832 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 387968 kB' 'KernelStack: 16576 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10275108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214328 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
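The backslashed \H\u\g\e\P\a\g\e\s... strings in this trace are only an xtrace artifact: bash prints the unquoted right-hand side of [[ $var == $get ]] with every character glob-escaped. The loop itself is a plain key/value scan of /proc/meminfo (or a node's sysfs copy, whose lines carry a "Node <N> " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips). A self-contained re-creation of the pattern; get_meminfo_sketch is a hypothetical name and a simplified shape, not the real setup/common.sh function:

    #!/usr/bin/env bash
    # Sketch of the traced pattern: split each meminfo line on ': ',
    # compare the key, and print the matching value.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy when one exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # IFS=': ' splits "HugePages_Total:    1024" into key and value;
        # sed strips the "Node <N> " prefix from per-node meminfo lines.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0   # assumption: report 0 when the key is absent
    }

    get_meminfo_sketch HugePages_Total      # e.g. 1024 on this machine
    get_meminfo_sketch HugePages_Surp 0     # per-node query -> 0 here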
[trace condensed -- setup/common.sh@31-32, 00:04:20.990 18:19:38 through 00:04:20.992 18:19:39: the same key-scan loop walks MemTotal through Unaccepted; none matches HugePages_Total, so every iteration takes the continue branch]
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.992 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 41090772 kB' 'MemUsed: 7026196 kB' 'SwapCached: 0 kB' 'Active: 3888080 kB' 'Inactive: 130876 kB' 'Active(anon): 3510460 kB' 'Inactive(anon): 0 kB' 'Active(file): 377620 kB' 'Inactive(file): 130876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3537580 kB' 'Mapped: 116104 kB' 'AnonPages: 484496 kB' 'Shmem: 3029084 kB' 'KernelStack: 8712 kB' 'PageTables: 6356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117988 kB' 'Slab: 340868 kB' 'SReclaimable: 117988 kB' 'SUnreclaim: 222880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
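get_nodes above seeds nodes_sys[0]=nodes_sys[1]=512: even_2G_alloc expects the 1024 pages of 2048 kB to be split evenly across the two NUMA nodes. The arithmetic behind that expectation, as a sketch with this run's values (names are illustrative):

    #!/usr/bin/env bash
    # Sketch of the even-allocation expectation being verified here.
    total_pages=1024        # nr_hugepages requested
    no_nodes=2              # nodes_sys[] entries set by get_nodes
    hugepagesize_kb=2048    # Hugepagesize from /proc/meminfo

    per_node=$(( total_pages / no_nodes ))   # 512 pages per node
    echo "expect ${per_node} pages (~$(( per_node * hugepagesize_kb / 1024 )) MiB) on each node"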
[trace condensed -- setup/common.sh@31-32, 00:04:20.992-00:04:20.993 18:19:39: the key-scan loop checks MemTotal through FilePmdMapped against HugePages_Surp for node 0; none matches, so every iteration takes the continue branch]
00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
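For each node, the trace reads HugePages_Surp from the node's meminfo and folds it, together with the reserved count, into the expected per-node total (hugepages.sh@115-117); the node 1 pass follows below. A sketch of that loop's assumed shape, reusing the hypothetical get_meminfo_sketch helper from earlier:

    #!/usr/bin/env bash
    # Assumed shape of the per-node check (not the script's own code):
    # expected pages per node, bumped by reserved and surplus counts.
    declare -a nodes_test=( 512 512 )
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo_sketch HugePages_Surp "$node")  # helper sketched earlier
        (( nodes_test[node] += surp ))
        echo "node${node}: expect ${nodes_test[node]} pages"
    done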
00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176560 kB' 'MemFree: 33683052 kB' 'MemUsed: 10493508 kB' 'SwapCached: 0 kB' 'Active: 5448012 kB' 'Inactive: 3405720 kB' 'Active(anon): 5317708 kB' 
00:04:20.993 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176560 kB' 'MemFree: 33683052 kB' 'MemUsed: 10493508 kB' 'SwapCached: 0 kB' 'Active: 5448012 kB' 'Inactive: 3405720 kB' 'Active(anon): 5317708 kB' 'Inactive(anon): 0 kB' 'Active(file): 130304 kB' 'Inactive(file): 3405720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8705272 kB' 'Mapped: 53260 kB' 'AnonPages: 148640 kB' 'Shmem: 5169248 kB' 'KernelStack: 7880 kB' 'PageTables: 2896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106876 kB' 'Slab: 271964 kB' 'SReclaimable: 106876 kB' 'SUnreclaim: 165088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:20.994 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [node1 meminfo scan elided: every field from MemTotal through HugePages_Free is skipped with 'continue' until HugePages_Surp matches]
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
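For orientation, the get_meminfo call traced above reduces to a few lines of bash. The following is a hedged reconstruction from the xtrace output, not the verbatim setup/common.sh source; the while-read loop stands in for the per-field 'continue' pattern the trace records:

shopt -s extglob   # required by the +([0-9]) pattern below

# get_meminfo FIELD [NODE]: print FIELD's value from the system-wide or
# per-NUMA-node meminfo view (sketch reconstructed from the trace).
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # Prefer the per-node file when a node number was passed and it exists.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan field by field, as the trace shows, until the requested one matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

With get=HugePages_Surp and node=1 this walks every line of the node1 dump printed above and echoes 0, the value hugepages.sh then folds into nodes_test[1].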
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:20.995
00:04:20.995 real	0m6.063s
00:04:20.995 user	0m2.087s
00:04:20.995 sys	0m4.046s
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:20.995 18:19:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:20.995 ************************************
00:04:20.995 END TEST even_2G_alloc
00:04:20.995 ************************************
00:04:20.995 18:19:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
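Before the log moves on to odd_alloc, the epilogue above is worth unpacking: the test's 1024 pages (2 GB of 2 MB pages) were split evenly, each node was expected to hold 512, and each node's meminfo reported 512. A hedged sketch of the hugepages.sh check, reconstructed from the trace (the nodes_sys values and the closing key comparison are assumptions inferred from the echo lines and the [[ 512 == \5\1\2 ]] test):

# Sketch of the final even_2G_alloc verification (reconstruction, not source).
declare -A sorted_t sorted_s
nodes_test=([0]=512 [1]=512)   # expected pages per node
nodes_sys=([0]=512 [1]=512)    # pages each node actually reported
for node in "${!nodes_test[@]}"; do
    # Using the counts as associative-array keys deduplicates them: if every
    # node ended up with the same count, each array keeps exactly one key.
    sorted_t[${nodes_test[node]}]=1
    sorted_s[${nodes_sys[node]}]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# One distinct count on each side, and they must agree: hence 512 == 512.
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]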
00:04:20.995 18:19:39 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:20.995 18:19:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:20.995 18:19:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:20.995 18:19:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:20.995 ************************************
00:04:20.995 START TEST odd_alloc
00:04:20.995 ************************************
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:20.995 18:19:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
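The 512/513 assignment just traced is the interesting part of odd_alloc: HUGEMEM=2049 converts to 2098176 kB, which rounds up to 1025 two-megabyte pages (1025 * 2048 kB = 2099200 kB, the Hugetlb figure reported further down). A hedged reconstruction of the per-node split, with the ':' no-ops standing in for the arithmetic side effects the trace records as ': 513', ': 1', ': 0':

# Sketch of get_test_nr_hugepages_per_node's split loop (reconstruction).
_nr_hugepages=1025
_no_nodes=2
declare -a nodes_test
while ((_no_nodes > 0)); do
    # Integer division hands the highest-numbered node the smaller share,
    # so the remainder drifts toward node 0: 1025 over 2 nodes -> 512 + 513.
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
    : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))   # trace: ': 513', ': 0'
    : $((--_no_nodes))                                  # trace: ': 1', ': 0'
done
echo "${nodes_test[@]}"   # prints: 513 512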
00:04:25.182 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:25.182 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:25.182 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.087 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74793824 kB' 'MemAvailable: 78297152 kB' 'Buffers: 4292 kB' 'Cached: 12238692 kB' 'SwapCached: 0 kB' 'Active: 9333644 kB' 'Inactive: 3536596 kB' 'Active(anon): 8825720 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 630464 kB' 'Mapped: 169524 kB' 'Shmem: 8198464 kB' 'KReclaimable: 224864 kB' 'Slab: 611704 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 386840 kB' 'KernelStack: 16368 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 10275916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214344 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
00:04:27.088 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [meminfo scan elided: every field from MemTotal through HardwareCorrupted is skipped with 'continue' until AnonHugePages matches]
00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
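The AnonHugePages probe that just returned, and the HugePages_Surp and HugePages_Rsvd probes that follow, are the three inputs verify_nr_hugepages collects before comparing totals. A hedged sketch of that surrounding logic, reconstructed from the trace and reusing the get_meminfo sketch shown earlier (the THP-path condition mirrors the @96 check; the closing comparison is an assumption):

# Sketch of how verify_nr_hugepages gathers its inputs (reconstruction).
anon=0
# Only sample AnonHugePages when transparent hugepages are not fully
# disabled, i.e. the sysfs string does not select "[never]".
if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory
fi
surp=$(get_meminfo HugePages_Surp)      # pages allocated beyond nr_hugepages
resv=$(get_meminfo HugePages_Rsvd)      # pages reserved but not yet faulted
# All three are 0 in this run, so the system-wide HugePages_Total of 1025
# must line up with the 1025 pages the test requested.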
00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74793772 kB' 'MemAvailable: 78297100 kB' 'Buffers: 4292 kB' 'Cached: 12238696 kB' 'SwapCached: 0 kB' 'Active: 9334180 kB' 'Inactive: 3536596 kB' 'Active(anon): 8826256 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631092 kB' 'Mapped: 169524 kB' 'Shmem: 8198468 kB' 'KReclaimable: 224864 kB' 'Slab: 611704 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 386840 kB' 'KernelStack: 16432 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 10275932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214328 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 
18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.089 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- 
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 read/compare loop -- WritebackTmp through HugePages_Rsvd all mismatch HugePages_Surp, each hitting continue]
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == HugePages_Surp ]]
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
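The xtrace above is one complete call to the get_meminfo helper in setup/common.sh. For readability, here is a condensed sketch of that helper, reconstructed from the @16-@33 line references in the trace; the loop body approximates the traced reads and is not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob # needed for the "Node +([0-9]) " pattern below

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # With a node index, read that NUMA node's meminfo instead (@23-@24).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Split "HugePages_Surp: 0" into var=HugePages_Surp, val=0 (@31).
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # mismatch: try the next key (@32)
            echo "$val" && return 0          # match: print the bare value (@33)
        done
        return 1
    }

Called as get_meminfo HugePages_Surp it prints 0 here; the long run of continue entries in the trace is simply this loop walking every earlier /proc/meminfo key before the match.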
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.090 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.091 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.091 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.091 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.091 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74793924 kB' 'MemAvailable: 78297252 kB' 'Buffers: 4292 kB' 'Cached: 12238712 kB' 'SwapCached: 0 kB' 'Active: 9334212 kB' 'Inactive: 3536596 kB' 'Active(anon): 8826288 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631092 kB' 'Mapped: 169524 kB' 'Shmem: 8198484 kB' 'KReclaimable: 224864 kB' 'Slab: 611704 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 386840 kB' 'KernelStack: 16432 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 10275952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214328 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
[xtrace condensed: setup/common.sh@31-32 read/compare loop -- MemTotal through HugePages_Free all mismatch HugePages_Rsvd, each hitting continue]
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == HugePages_Rsvd ]]
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:27.092 nr_hugepages=1025
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:27.092 resv_hugepages=0
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:27.092 surplus_hugepages=0
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:27.092 anon_hugepages=0
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
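The two arithmetic checks just traced are the core consistency test of this odd_alloc case: the total the kernel reports must equal the requested page count plus surplus and reserved pages, and with surp=resv=0 it must equal nr_hugepages exactly. In essence, with the values echoed above:

    # Values taken from the trace: nr_hugepages=1025, surp=0, resv=0.
    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv )) # hugepages.sh@107
    (( 1025 == nr_hugepages ))               # hugepages.sh@109
    echo "hugepage accounting consistent"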
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.092 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.093 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74793424 kB' 'MemAvailable: 78296752 kB' 'Buffers: 4292 kB' 'Cached: 12238732 kB' 'SwapCached: 0 kB' 'Active: 9334544 kB' 'Inactive: 3536596 kB' 'Active(anon): 8826620 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631428 kB' 'Mapped: 169524 kB' 'Shmem: 8198504 kB' 'KReclaimable: 224864 kB' 'Slab: 611704 kB' 'SReclaimable: 224864 kB' 'SUnreclaim: 386840 kB' 'KernelStack: 16448 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 10275972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214328 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
[xtrace condensed: setup/common.sh@31-32 read/compare loop -- MemTotal through Unaccepted all mismatch HugePages_Total, each hitting continue]
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == HugePages_Total ]]
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
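get_nodes, traced at hugepages.sh@27-@33, enumerates the NUMA nodes and records each node's current hugepage count; here it finds node0 holding 512 pages and node1 holding 513. A minimal sketch, assuming the counts come from the per-node 2048 kB nr_hugepages sysfs leaf (the trace only shows the resulting values):

    shopt -s extglob # for the node+([0-9]) glob used in the trace
    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed sysfs leaf, matching the 2048 kB Hugepagesize reported above.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]} # 2 on this host
    (( no_nodes > 0 ))

This split is the point of the odd_alloc case: 1025 pages cannot divide evenly across two nodes, so a 512/513 distribution (512 + 513 = 1025) is the expected outcome.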
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.094 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 41100084 kB' 'MemUsed: 7016884 kB' 'SwapCached: 0 kB' 'Active: 3887200 kB' 'Inactive: 130876 kB' 'Active(anon): 3509580 kB' 'Inactive(anon): 0 kB' 'Active(file): 377620 kB' 'Inactive(file): 130876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3537724 kB' 'Mapped: 116644 kB' 'AnonPages: 483536 kB' 'Shmem: 3029228 kB' 'KernelStack: 8680 kB' 'PageTables: 6240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117988 kB' 'Slab: 340284 kB' 'SReclaimable: 117988 kB' 'SUnreclaim: 222296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 read/compare loop -- MemTotal through HugePages_Free of node0 all mismatch HugePages_Surp, each hitting continue]
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == HugePages_Surp ]]
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.095 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176560 kB' 'MemFree: 33692716 kB' 'MemUsed: 10483844 kB' 'SwapCached: 0 kB' 'Active: 5450944 kB' 'Inactive: 3405720 kB' 'Active(anon): 5320640 kB' 'Inactive(anon): 0 kB' 'Active(file): 130304 kB' 'Inactive(file): 3405720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8705320 kB' 'Mapped: 53384 kB' 'AnonPages: 151468 kB' 'Shmem: 5169296 kB' 'KernelStack: 7768 kB' 'PageTables: 2592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106876 kB' 'Slab: 271420 kB' 'SReclaimable: 106876 kB' 'SUnreclaim: 164544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == HugePages_Surp ]]
00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.096
18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:27.096 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
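The helper being stepped through here, setup/common.sh's get_meminfo, can be reconstructed almost entirely from the trace. A minimal sketch, assuming mem is filled straight from the meminfo file (how the real helper populates it is not visible in the log):

    # Hedged reconstruction of get_meminfo from the xtrace above.
    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"          # assumption: read the file directly
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long run of checks in the trace
            echo "$val"                        # number only; a trailing "kB" lands in _
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp 1 against the node1 snapshot above, this walks every field and finally echoes 0, which is exactly what hugepages.sh then adds into nodes_test[1].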
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:27.097 node0=512 expecting 513
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:27.097 node1=513 expecting 512
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:27.097 
00:04:27.097 real 0m5.881s
00:04:27.097 user 0m2.043s
00:04:27.097 sys 0m3.900s
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:27.097 18:19:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:27.097 ************************************
00:04:27.097 END TEST odd_alloc
00:04:27.097 ************************************
00:04:27.097 18:19:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:27.097 18:19:45 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:27.097 18:19:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:27.097 18:19:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:27.097 18:19:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:27.097 ************************************
00:04:27.097 START TEST custom_alloc
00:04:27.097 ************************************
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:27.097 18:19:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
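The sizing logic traced through setup/hugepages.sh@49-@84 above can be summarized in a sketch. This is a hedged reconstruction from the trace only; the units are inferred (size in kB, default_hugepages equal to the 2048 kB Hugepagesize reported later), and the node count is assumed to be probed elsewhere:

    # Hedged sketch: derive a page count from a size, then spread it over nodes.
    get_test_nr_hugepages() {
        local size=$1                   # 1048576 kB (1 GiB) -> 512 pages here
        local default_hugepages=2048    # assumption: huge page size in kB
        # The (( 1 > 1 )) at @50 in the trace looks like a user-supplied count check.
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))
        get_test_nr_hugepages_per_node
    }

    get_test_nr_hugepages_per_node() {
        local user_nodes=()
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=2               # assumption: NUMA node count detected elsewhere
        declare -ga nodes_test=()
        # When nodes_hp is already populated (the @74/@75 branch), copy it through.
        if (( ${#nodes_hp[@]} > 0 )); then
            local node
            for node in "${!nodes_hp[@]}"; do
                nodes_test[node]=${nodes_hp[node]}
            done
            return 0
        fi
        # Otherwise split evenly, matching the ": 256 / : 1" arithmetic in the trace.
        while (( _no_nodes > 0 )); do
            nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
            : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
            : $(( _no_nodes-- ))
        done
    }

That reproduces the numbers above: 1048576/2048 = 512 split as 256+256, then 2097152/2048 = 1024, and finally nodes_hp=(512 1024) copied verbatim, giving HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'.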
00:04:31.289 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:31.289 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:31.289 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:33.191 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:33.191 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:33.191 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
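The @96 test just above gates the anonymous-huge-page bookkeeping on transparent hugepage state. A hedged sketch of that check (the sysfs path is the standard kernel location; the surrounding variable names are assumptions):

    # Only count AnonHugePages when THP is not hard-disabled. The trace shows
    # the expansion "always [madvise] never", i.e. THP is in madvise mode here.
    thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_state != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB on this box, so anon stays 0
    fi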
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73722056 kB' 'MemAvailable: 77225352 kB' 'Buffers: 4292 kB' 'Cached: 12238880 kB' 'SwapCached: 0 kB' 'Active: 9335308 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827384 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632000 kB' 'Mapped: 169516 kB' 'Shmem: 8198652 kB' 'KReclaimable: 224800 kB' 'Slab: 611700 kB' 'SReclaimable: 224800 kB' 'SUnreclaim: 386900 kB' 'KernelStack: 16448 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 10276876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214376 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
00:04:33.192 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [... field-by-field scan elided: MemTotal through HardwareCorrupted each fail the AnonHugePages match and hit 'continue' ...]
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
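With anon settled, verify_nr_hugepages goes on to gather the global surplus and then, as the odd_alloc trace showed at hugepages.sh@115-@128, folds reservations and per-node surplus into the expected counts before printing them. A hedged sketch of that loop, assuming nodes_sys holds the counts the kernel actually reports per node:

    # Accumulate expected pages per node, then emit "nodeN=<actual> expecting <expected>".
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done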
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73723096 kB' 'MemAvailable: 77226392 kB' 'Buffers: 4292 kB' 'Cached: 12238880 kB' 'SwapCached: 0 kB' 'Active: 9335016 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827092 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631672 kB' 'Mapped: 169500 kB' 'Shmem: 8198652 kB' 'KReclaimable: 224800 kB' 'Slab: 611700 kB' 'SReclaimable: 224800 kB' 'SUnreclaim: 386900 kB' 'KernelStack: 16448 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 10276892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214360 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
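A quick consistency check on the snapshot above, using only numbers already printed: the HUGENODE string asked for 512 pages on node0 and 1024 on node1, and the kernel now reports exactly that total.

    echo $(( 512 + 1024 ))       # 1536, matching HugePages_Total and HugePages_Free
    echo $(( 1536 * 2048 )) kB   # 3145728 kB, matching Hugetlb (1536 pages of 2048 kB)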
00:04:33.193 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [... field-by-field scan in progress: MemTotal through SecPageTables each fail the HugePages_Surp match and hit 'continue'; the transcript breaks off mid-scan ...]
setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 
18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.194 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73723140 kB' 'MemAvailable: 77226420 kB' 'Buffers: 4292 kB' 'Cached: 12238904 kB' 'SwapCached: 0 kB' 'Active: 9334944 kB' 'Inactive: 3536596 kB' 
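To make the traced mechanics easier to follow, here is a minimal, self-contained sketch of the lookup that get_meminfo performs above. The helper name get_meminfo_sketch is hypothetical, not the SPDK function itself; the approach mirrors the traced commands: pick /proc/meminfo (or a per-node file when a node id is given), strip any "Node <n> " prefix, split each line on ': ', and print the value of the first matching key.

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern, as in setup/common.sh@29

    # get_meminfo_sketch KEY [NODE] - hypothetical stand-in for the traced get_meminfo
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # With a node id, read that node's own view instead of the global one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # Against the snapshot above, these would print 0 and 1536 respectively:
    get_meminfo_sketch HugePages_Surp
    get_meminfo_sketch HugePages_Total

Splitting each line on ': ' keeps the parser independent of how many fields a key carries (values with and without a trailing kB both work), which is why the trace shows exactly one [[ ... ]] test and one continue per key.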
00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... setup/common.sh@17-31 local/setup trace identical to the HugePages_Surp call above (node unset, mem_f=/proc/meminfo) ...]
00:04:33.195 18:19:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73723140 kB' 'MemAvailable: 77226420 kB' 'Buffers: 4292 kB' 'Cached: 12238904 kB' 'SwapCached: 0 kB' 'Active: 9334944 kB' 'Inactive: 3536596 kB' 'Active(anon): 8827020 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631560 kB' 'Mapped: 169500 kB' 'Shmem: 8198676 kB' 'KReclaimable: 224768 kB' 'Slab: 611668 kB' 'SReclaimable: 224768 kB' 'SUnreclaim: 386900 kB' 'KernelStack: 16448 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 10276912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214360 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
[... repetitive per-key scan trace elided: every snapshot key before HugePages_Rsvd failed the match and hit continue ...]
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:33.197 nr_hugepages=1536
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:33.197 resv_hugepages=0
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:33.197 surplus_hugepages=0
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:33.197 anon_hugepages=0
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
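The assignments and arithmetic tests above, together with the HugePages_Total lookup that follows, amount to one consistency check: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages. A rough restatement under that reading, reusing the hypothetical get_meminfo_sketch helper from the earlier sketch (the commented values are the ones observed in this run):

    # Rough restatement of the hugepages.sh@99-110 checks traced here;
    # assumes get_meminfo_sketch from the earlier sketch is defined.
    nr_hugepages=1536                               # pool size requested by the test
    surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)     # 1536 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    # Consistent only when the global pool accounts for every requested,
    # surplus, and reserved page.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "hugepage accounting mismatch" >&2
    fi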
10276936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214360 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.197 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- 
00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the HugePages_Total scan continues over SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted; no field matches, so each iteration takes the continue branch at setup/common.sh@32]
00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:33.198 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
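The scan that just completed is the whole trick behind get_meminfo in the system-wide case: split each 'Key: value kB' line of /proc/meminfo on ': ' and echo the value once the requested key comes up. A minimal standalone sketch of the same idea (the helper name get_meminfo_field is assumed here, not the SPDK function):

    get_meminfo_field() {
        # Same IFS=': ' read loop as the trace above, against /proc/meminfo.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    total=$(get_meminfo_field HugePages_Total)   # 1536 on this runner
    (( total == 1536 )) && echo 'hugepage accounting adds up'

The hugepages.sh@110 arithmetic above only passes when the kernel-reported total equals the sum the test expects (nr_hugepages plus surplus plus reserved pages).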
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.199 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 41101148 kB' 'MemUsed: 7015820 kB' 'SwapCached: 0 kB' 'Active: 3886684 kB' 'Inactive: 130876 kB' 'Active(anon): 3509064 kB' 'Inactive(anon): 0 kB' 'Active(file): 377620 kB' 'Inactive(file): 130876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3537876 kB' 'Mapped: 116240 kB' 'AnonPages: 482880 kB' 'Shmem: 3029380 kB' 'KernelStack: 8632 kB' 'PageTables: 6044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117988 kB' 'Slab: 340000 kB' 'SReclaimable: 117988 kB' 'SUnreclaim: 222012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
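The node-0 snapshot just printed comes from /sys/devices/system/node/node0/meminfo, where every line carries a 'Node 0 ' prefix that the helper strips with the extglob pattern +([0-9]). A sketch of that per-node variant (again an assumed standalone helper, not the SPDK code):

    shopt -s extglob   # required for the +([0-9]) pattern below
    get_node_meminfo() {
        # Per-node lines read "Node 0 HugePages_Surp: 0"; drop the prefix
        # first, then scan key/value pairs exactly as in the system case.
        local get=$1 node=$2 line var val _ mem
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_node_meminfo HugePages_Total 0   # 512 in the snapshot above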
[xtrace condensed: the node-0 scan reads MemTotal through HugePages_Free without a match, taking the continue branch at setup/common.sh@32 on every field until HugePages_Surp]
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
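Node 0 reports zero surplus hugepages, which the test folds into its per-node expectation before moving on to node 1. Roughly what hugepages.sh@115-117 is doing, with the values visible in this run (a sketch, not the SPDK loop itself):

    declare -a nodes_test=([0]=512 [1]=1024)   # split requested by custom_alloc
    resv=0                                     # no reserved pages in this run
    for node in "${!nodes_test[@]}"; do
        # the value sits in field 4: "Node 0 HugePages_Surp: 0"
        surp=$(awk '/HugePages_Surp:/ {print $4}' \
            "/sys/devices/system/node/node${node}/meminfo")
        (( nodes_test[node] += resv + surp ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # 512 and 1024 here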
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.200 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176560 kB' 'MemFree: 32624092 kB' 'MemUsed: 11552468 kB' 'SwapCached: 0 kB' 'Active: 5448268 kB' 'Inactive: 3405720 kB' 'Active(anon): 5317964 kB' 'Inactive(anon): 0 kB' 'Active(file): 130304 kB' 'Inactive(file): 3405720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8705360 kB' 'Mapped: 53260 kB' 'AnonPages: 148752 kB' 'Shmem: 5169336 kB' 'KernelStack: 7768 kB' 'PageTables: 2644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106780 kB' 'Slab: 271668 kB' 'SReclaimable: 106780 kB' 'SUnreclaim: 164888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the node-1 scan likewise walks MemTotal through HugePages_Free, continuing on every field until HugePages_Surp]
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
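With both nodes' surplus counts in hand (zero each), the verification that follows collapses the per-node totals into associative-array keys, echoes each node against its expectation, and compares the joined list against the expected pattern. A compact sketch of that idea (array names mirror the trace; the final comparison is simplified):

    declare -A sorted_t=() sorted_s=()
    nodes_test=([0]=512 [1]=1024)   # values from this run
    nodes_sys=([0]=512 [1]=1024)    # what get_nodes found in sysfs
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # array keys act as a set
        sorted_s[${nodes_sys[node]}]=1
        echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    [[ ${nodes_test[0]},${nodes_test[1]} == '512,1024' ]] && echo 'custom split verified'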
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:33.201 node0=512 expecting 512
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:33.201 node1=1024 expecting 1024
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:33.201
00:04:33.201 real 0m6.030s
00:04:33.201 user 0m2.102s
00:04:33.201 sys 0m3.953s
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:33.201 18:19:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:33.201 ************************************
00:04:33.201 END TEST custom_alloc
00:04:33.201 ************************************
00:04:33.201 18:19:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:33.202 18:19:51 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:33.202 18:19:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:33.202 18:19:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:33.202 18:19:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:33.202 ************************************
00:04:33.202 START TEST no_shrink_alloc
00:04:33.202 ************************************
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
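get_test_nr_hugepages converts the requested size into a page count before the per-node split: 2097152 kB here is 2 GiB, and at the default 2048 kB hugepage size that gives the 1024 pages seen in the trace. The arithmetic, as a sketch (the per-node split that follows assigns all of it to node 0):

    size_kb=2097152                            # request from the trace above
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
    nr_hugepages=$(( size_kb / hugepage_kb ))  # 1024
    nodes_test=([0]=$nr_hugepages)             # all pinned on node 0 in this test
    echo "nr_hugepages=$nr_hugepages"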
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:33.202 18:19:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:37.383 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:37.383 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:37.383 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
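verify_nr_hugepages opens with the hugepages.sh@96 guard just traced: anonymous hugepage accounting is only meaningful while transparent hugepages are not disabled, and the active THP mode is the bracketed entry in the sysfs file ('always [madvise] never' on this runner). A sketch of the same check:

    # Active mode is marked with [brackets]; skip the AnonHugePages read if THP is off.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *'[never]'* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 kB in this run
    fi
    echo "anon=${anon:-0}"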
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.775 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74769904 kB' 'MemAvailable: 78273184 kB' 'Buffers: 4292 kB' 'Cached: 12239068 kB' 'SwapCached: 0 kB' 'Active: 9336056 kB' 'Inactive: 3536596 kB' 'Active(anon): 8828132 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632528 kB' 'Mapped: 169556 kB' 'Shmem: 8198840 kB' 'KReclaimable: 224768 kB' 'Slab: 610996 kB' 'SReclaimable: 224768 kB' 'SUnreclaim: 386228 kB' 'KernelStack: 16464 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10277772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214456 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
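Note the file selection this time: with no node argument (local node=), the node/meminfo existence test fails and the helper stays on /proc/meminfo, which is why the snapshot above covers the whole machine rather than one socket. The selection step, sketched as its own function (name assumed):

    pick_meminfo_file() {
        # Mirrors common.sh@22-25: prefer the per-node file when a node is
        # given and present, otherwise fall back to the system-wide file.
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
            mem_f=/sys/devices/system/node/node${node}/meminfo
        fi
        echo "$mem_f"
    }

    pick_meminfo_file     # -> /proc/meminfo
    pick_meminfo_file 0   # -> /sys/devices/system/node/node0/meminfo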
[xtrace condensed: the AnonHugePages scan walks every field of the system snapshot above, from Buffers through HardwareCorrupted, taking the continue branch until AnonHugePages matches]
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74770668 kB' 'MemAvailable: 78273948 kB' 'Buffers: 4292 kB' 'Cached: 12239072 kB' 'SwapCached: 0 kB' 'Active: 9336532 kB' 'Inactive: 3536596 kB' 'Active(anon): 8828608 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633012 kB' 'Mapped: 169492 kB' 
'Shmem: 8198844 kB' 'KReclaimable: 224768 kB' 'Slab: 611032 kB' 'SReclaimable: 224768 kB' 'SUnreclaim: 386264 kB' 'KernelStack: 16448 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10277792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214440 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.776 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.776 18:19:56 
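The common.sh@17-29 entries above show how get_meminfo picks its data source before it scans anything: probe for a per-NUMA-node meminfo file, fall back to /proc/meminfo, and strip any "Node N " prefixes. A minimal bash sketch of that selection, reconstructed from the xtrace output (not copied from SPDK's setup/common.sh), follows; with the empty node argument used in this run, the probe of /sys/devices/system/node/node/meminfo fails, so the system-wide /proc/meminfo is read.

#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above; not the verbatim SPDK source.
shopt -s extglob   # required by the +([0-9]) pattern used below

get_meminfo_lines() {
    local node=$1            # empty in this run, hence the node/node/meminfo probe
    local mem_f=/proc/meminfo
    # Prefer the per-NUMA-node view when the requested node exists.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}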
[setup/common.sh@31-32 loop: meminfo keys MemTotal through HugePages_Rsvd read and skipped (no match for HugePages_Surp)]
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.777 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74769748 kB' 'MemAvailable: 78273028 kB' 'Buffers: 4292 kB' 'Cached: 12239088 kB' 'SwapCached: 0 kB' 'Active: 9336656 kB' 'Inactive: 3536596 kB' 'Active(anon): 8828732 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633056 kB' 'Mapped: 169492 kB' 'Shmem: 8198860 kB' 'KReclaimable: 224768 kB' 'Slab: 611032 kB' 'SReclaimable: 224768 kB' 'SUnreclaim: 386264 kB' 'KernelStack: 16432 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10300300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214456 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
[setup/common.sh@31-32 loop: meminfo keys MemTotal through HugePages_Free read and skipped (no match for HugePages_Rsvd)]
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:39.041 nr_hugepages=1024
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:39.041 resv_hugepages=0
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.041 surplus_hugepages=0
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:39.041 anon_hugepages=0
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.041 18:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.041 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.041 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.041 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.041 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74769416 kB' 'MemAvailable: 78272696 kB' 'Buffers: 4292 kB' 'Cached: 12239112 kB' 'SwapCached: 0 kB' 'Active: 9336708 kB' 'Inactive: 3536596 kB' 'Active(anon): 8828784 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633224 kB' 'Mapped: 169492 kB' 'Shmem: 8198884 kB' 'KReclaimable: 224768 kB' 'Slab: 611032 kB' 'SReclaimable: 224768 kB' 'SUnreclaim: 386264 kB' 'KernelStack: 16464 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10279596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214408 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
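Each printf above feeds the common.sh@31-33 loop: split every meminfo line on ': ', compare the key against the requested one, and echo the value on the first match. The sketch below reconstructs that loop together with the hugepages bookkeeping traced at setup/hugepages.sh@97-110; it is inferred from the xtrace alone (and reads /proc/meminfo directly rather than the mem array), not the verbatim SPDK scripts.

#!/usr/bin/env bash
# get_meminfo as inferred from the trace: IFS=': ' splits a line such as
# "HugePages_Surp:   0" into var=HugePages_Surp and val=0, with any trailing
# "kB" absorbed by the throwaway third field.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

# Bookkeeping as traced at setup/hugepages.sh@97-110 (values from this run):
anon=$(get_meminfo AnonHugePages)            # 0
surp=$(get_meminfo HugePages_Surp)           # 0
resv=$(get_meminfo HugePages_Rsvd)           # 0
nr_hugepages=$(get_meminfo HugePages_Total)  # 1024
printf '%s\n' "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
    "surplus_hugepages=$surp" "anon_hugepages=$anon"
# Every configured page must be a plain, non-surplus, non-reserved hugepage.
(( 1024 == nr_hugepages + surp + resv ))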
9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10279596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214408 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:04:39.041 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.041 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.041 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.041 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical read/continue traces for the remaining /proc/meminfo keys (MemFree through Unaccepted) condensed; none match HugePages_Total ...]
00:04:39.042 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.042 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
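
The condensed scan above is the crux of this check: get_meminfo walked /proc/meminfo until HugePages_Total matched and echoed 1024, which hugepages.sh then compares against the configured page count plus surplus and reserved pages (the @107/@110 arithmetic in this trace). A minimal standalone sketch of that bookkeeping, assuming the same /proc/meminfo fields; the helper name is illustrative, not SPDK's:

    #!/usr/bin/env bash
    # Sketch: check that the hugepage pool matches what the test configured.
    set -euo pipefail

    expected=${1:-1024}   # nr_hugepages the test set up

    # Fetch one numeric field from /proc/meminfo, e.g. HugePages_Total.
    meminfo_val() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    total=$(meminfo_val HugePages_Total)
    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)

    echo "nr_hugepages=$expected surplus_hugepages=$surp resv_hugepages=$resv"
    # Mirrors the @107-style comparison traced above; in this run both
    # surp and resv are 0, so total must equal expected exactly.
    (( total == expected + surp + resv )) || { echo "hugepage count mismatch" >&2; exit 1; }
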
00:04:39.042 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 40058212 kB' 'MemUsed: 8058756 kB' 'SwapCached: 0 kB' 'Active: 3887636 kB' 'Inactive: 130876 kB' 'Active(anon): 3510016 kB' 'Inactive(anon): 0 kB' 'Active(file): 377620 kB' 'Inactive(file): 130876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3538036 kB' 'Mapped: 116308 kB' 'AnonPages: 483636 kB' 'Shmem: 3029540 kB' 'KernelStack: 8616 kB' 'PageTables: 6100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117988 kB' 'Slab: 339904 kB' 'SReclaimable: 117988 kB' 'SUnreclaim: 221916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
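
For readers skimming the log: every line in this section is the bash xtrace of one small helper, and the @17-@33 entries above show its shape. A re-creation of that get_meminfo idiom (simplified from what the trace shows, so treat it as a sketch rather than the exact SPDK source): it reads /proc/meminfo or the per-NUMA-node copy, strips the "Node <n> " prefix the per-node files carry, then scans key by key -- every skipped key is one of the read/continue entries condensed in this section.

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem mem_f=/proc/meminfo
        # A node argument switches to that node's view of the same counters.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; drop it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total    # system-wide: 1024 in this run
    get_meminfo HugePages_Surp 0   # NUMA node 0 only: 0 in this run
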
00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.043 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical read/continue traces for the remaining node0 meminfo keys (MemFree through FilePmdMapped) condensed; none match HugePages_Surp ...]
00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:39.044 node0=1024 expecting 1024 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.044 18:19:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:43.312 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.312 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:43.312 
0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:43.312 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:44.690 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74812256 kB' 'MemAvailable: 78315284 kB' 'Buffers: 4292 kB' 'Cached: 12239244 kB' 'SwapCached: 0 kB' 'Active: 9337936 kB' 'Inactive: 3536596 kB' 'Active(anon): 8830012 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634308 kB' 'Mapped: 169696 kB' 'Shmem: 8199008 kB' 'KReclaimable: 224768 kB' 'Slab: 611460 kB' 'SReclaimable: 224768 kB' 'SUnreclaim: 386692 kB' 'KernelStack: 16432 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10281124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214360 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB'
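
Two things worth noting at this point in the log. First, the INFO line above is the event this no_shrink_alloc test exists to exercise: scripts/setup.sh was re-run with NRHUGE=512 and CLEAR_HUGE=no (set at hugepages.sh@202), and with 1024 pages already allocated on node0 it keeps the larger pool rather than shrinking it. Second, verify_nr_hugepages opens by probing anonymous hugepages: the @96 test matches the contents of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" on this host; the brackets mark the active mode) to decide whether THP could be inflating AnonHugePages. A hedged sketch of that gate, reusing the get_meminfo sketch from earlier:

    # Only consult AnonHugePages when THP is not fully disabled.
    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP may be active ("[madvise]" in this run), so the counter matters.
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the dump above
    fi
    echo "anon_hugepages=$anon"
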
00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.690 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical read/continue traces for the remaining /proc/meminfo keys (MemFree through HardwareCorrupted) condensed; AnonHugePages is the first match ...]
00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
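
With anon and the global surplus in hand, the verification repeats the per-node walk seen earlier in this section (hugepages.sh@112-@128, which printed "node0=1024 expecting 1024"): enumerate /sys/devices/system/node/node*, read each node's hugepage count, and compare it with what the test expects there. Those counts can also be read directly from sysfs; a compact sketch, assuming 2048 kB pages and hard-coding this run's layout (1024 pages on node0, 0 on node1) as the expectation:

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    declare -A expected=([0]=1024 [1]=0)   # this run's layout; adjust per test
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        # Kernel sysfs interface: per-node pool size for 2 MiB hugepages.
        count=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node$node=$count expecting ${expected[$node]:-0}"
        (( count == ${expected[$node]:-0} )) || exit 1
    done
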
setup/common.sh@19 -- # local var val 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74811844 kB' 'MemAvailable: 78315124 kB' 'Buffers: 4292 kB' 'Cached: 12239248 kB' 'SwapCached: 0 kB' 'Active: 9337972 kB' 'Inactive: 3536596 kB' 'Active(anon): 8830048 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634404 kB' 'Mapped: 169632 kB' 'Shmem: 8199020 kB' 'KReclaimable: 224768 kB' 'Slab: 611460 kB' 'SReclaimable: 224768 kB' 'SUnreclaim: 386692 kB' 'KernelStack: 16592 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10281512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.691 18:20:02 
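By this point the trace has walked the whole helper end to end, so its shape is clear: get_meminfo reads /proc/meminfo (or a node's sysfs meminfo file when a NUMA node is given), strips any leading "Node N " prefix, then scans "Key: value" pairs until the requested statistic matches and echoes its value. A minimal bash sketch reconstructed from the setup/common.sh@16-@33 trace lines above; the real SPDK script may differ in detail (e.g., the exact ordering of the node check and the loop construct):

#!/usr/bin/env bash
# Minimal reconstruction of the get_meminfo helper traced above
# (setup/common.sh@16-@33); the actual SPDK source may differ in detail.
shopt -s extglob   # required by the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}     # @17/@18: statistic name, optional NUMA node
	local var val                # @19
	local mem_f mem              # @20

	mem_f=/proc/meminfo                                    # @22
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo   # @23/@25
	fi

	mapfile -t mem < "$mem_f"          # @28: one array element per line
	mem=("${mem[@]#Node +([0-9]) }")   # @29: drop the per-node "Node N " prefix

	# @31-@33: split each "Key: value [kB]" line; print the first match.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # the escaped patterns in the trace
		echo "$val"                        # @33: e.g. "echo 0" above
		return 0
	done < <(printf '%s\n' "${mem[@]}")   # @16
}

get_meminfo HugePages_Surp   # -> 0 for the snapshot above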
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.691 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.692 
18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.692 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.693 18:20:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74810464 kB' 'MemAvailable: 78313744 kB' 'Buffers: 4292 kB' 'Cached: 12239264 kB' 'SwapCached: 0 kB' 'Active: 9337620 kB' 'Inactive: 3536596 kB' 'Active(anon): 8829696 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634044 kB' 'Mapped: 169632 kB' 'Shmem: 8199036 kB' 'KReclaimable: 224768 kB' 'Slab: 611460 kB' 'SReclaimable: 224768 kB' 'SUnreclaim: 386692 kB' 'KernelStack: 16432 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10280040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214424 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
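A quick sanity check on the snapshot just dumped: the 'Hugetlb' field is simply the whole pool expressed in kB, i.e. HugePages_Total pages of Hugepagesize each, so the three hugepage fields are mutually consistent:

# For the dump above: 1024 pages * 2048 kB/page
echo $(( 1024 * 2048 ))   # 2097152 kB (2 GiB), matching 'Hugetlb: 2097152 kB'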
IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.693 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.694 18:20:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.694 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.995 nr_hugepages=1024 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.995 resv_hugepages=0 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.995 surplus_hugepages=0 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.995 anon_hugepages=0 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.995 18:20:02 
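Putting the pieces together, the no_shrink_alloc test harvests each counter through get_meminfo and then asserts that the kernel's pool matches what was requested, as traced at setup/hugepages.sh@97-@110 above. A rough sketch of that accounting step; it is a simplified reconstruction, and 'expected' is a hypothetical stand-in for whatever expands to the literal 1024 seen at @107/@109:

# Rough sketch of the accounting step at setup/hugepages.sh@97-@110 as
# traced above; not the verbatim SPDK script.
expected=1024
nr_hugepages=1024   # set earlier in hugepages.sh (not visible in this trace)

anon=$(get_meminfo AnonHugePages)    # @97  -> anon=0
surp=$(get_meminfo HugePages_Surp)   # @99  -> surp=0
resv=$(get_meminfo HugePages_Rsvd)   # @100 -> resv=0

echo "nr_hugepages=$nr_hugepages"    # @102
echo "resv_hugepages=$resv"          # @103
echo "surplus_hugepages=$surp"       # @104
echo "anon_hugepages=$anon"          # @105

# @107/@109: every page the kernel reports is accounted for, and the pool
# has not shrunk below the requested size; @110 then re-reads HugePages_Total.
(( expected == nr_hugepages + surp + resv ))
(( expected == nr_hugepages ))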
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.995 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74809324 kB' 'MemAvailable: 78312604 kB' 'Buffers: 4292 kB' 'Cached: 12239288 kB' 'SwapCached: 0 kB' 'Active: 9337892 kB' 'Inactive: 3536596 kB' 'Active(anon): 8829968 kB' 'Inactive(anon): 0 kB' 'Active(file): 507924 kB' 'Inactive(file): 3536596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634224 kB' 'Mapped: 169632 kB' 'Shmem: 8199060 kB' 'KReclaimable: 224768 kB' 'Slab: 611460 kB' 'SReclaimable: 224768 kB' 'SUnreclaim: 386692 kB' 'KernelStack: 16480 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 10281556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 63360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1244608 kB' 'DirectMap2M: 26742784 kB' 'DirectMap1G: 73400320 kB' 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.996
[xtrace elided: setup/common.sh@31-32 repeats the identical "IFS=': '" / "read -r var val _" / "continue" triplet for every /proc/meminfo key from Cached through CmaTotal while get_meminfo scans for HugePages_Total; several hundred identical iterations removed; the trace resumes below at CmaFree]
00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.996 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 40054968 kB' 'MemUsed: 8062000 kB' 'SwapCached: 0 kB' 'Active: 3887744 kB' 'Inactive: 130876 kB' 'Active(anon): 3510124 kB' 'Inactive(anon): 0 kB' 'Active(file): 377620 kB' 'Inactive(file): 130876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3538168 kB' 'Mapped: 116372 kB' 'AnonPages: 483636 kB' 'Shmem: 3029672 kB' 'KernelStack: 8616 kB' 'PageTables: 6044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117988 kB' 'Slab: 340092 kB' 'SReclaimable: 117988 kB' 'SUnreclaim: 222104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997
[xtrace elided: the same per-key read/compare/continue scan repeats over the node0 meminfo fields from Inactive through ShmemPmdMapped while get_meminfo searches for HugePages_Surp; the trace resumes below at FileHugePages]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:44.997 node0=1024 expecting 1024 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:44.997 00:04:44.997 real 0m11.793s 00:04:44.997 user 0m3.928s 00:04:44.997 sys 0m7.920s 00:04:44.997 
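The trace above shows how setup/common.sh's get_meminfo answers questions like "how many hugepages does node 0 hold": it slurps /proc/meminfo (or the per-node /sys/devices/system/node/nodeN/meminfo), strips the "Node N " prefix, and compares each key until the requested one matches. A minimal standalone sketch of that pattern (the function name and structure are an assumed reconstruction, not the exact SPDK source):

get_meminfo_sketch() {   # usage: get_meminfo_sketch HugePages_Surp 0
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo mem
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node N "; drop it, as the trace does
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Total val=1024
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

Run against the node0 dump printed above, get_meminfo_sketch HugePages_Total 0 would print 1024 and get_meminfo_sketch HugePages_Surp 0 would print 0, matching the two values the test echoes.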
18:20:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.997 18:20:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:44.997 ************************************ 00:04:44.997 END TEST no_shrink_alloc 00:04:44.997 ************************************ 00:04:44.997 18:20:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:44.997 18:20:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:44.997 00:04:44.997 real 0m45.528s 00:04:44.997 user 0m14.492s 00:04:44.997 sys 0m28.187s 00:04:44.997 18:20:03 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.997 18:20:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:44.997 ************************************ 00:04:44.997 END TEST hugepages 00:04:44.997 ************************************ 00:04:44.997 18:20:03 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:44.997 18:20:03 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:44.997 18:20:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.997 18:20:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.997 18:20:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:44.997 ************************************ 00:04:44.997 START TEST driver 00:04:44.997 ************************************ 00:04:44.997 18:20:03 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:45.255 * Looking for test storage... 
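Just before the END TEST hugepages banner, clear_hp resets every hugepage pool the suite touched. A sketch of that teardown against the same sysfs layout the trace walks (names are an assumed reconstruction; writing these files requires root):

clear_hp_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # release this node's reserved pages
        done
    done
    export CLEAR_HUGE=yes   # the trace exports this flag for later setup.sh calls
}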
00:04:45.255 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:45.255 18:20:03 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:45.255 18:20:03 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.255 18:20:03 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.387 18:20:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:53.387 18:20:10 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.387 18:20:10 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.387 18:20:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.387 ************************************ 00:04:53.387 START TEST guess_driver 00:04:53.387 ************************************ 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 238 > 0 )) 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:53.387 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:53.387 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:53.388 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:53.388 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:53.388 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:53.388 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:53.388 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:53.388 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:53.388 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:53.388 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:53.388 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:53.388 18:20:10 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:53.388 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' Looking for driver=vfio-pci 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 18:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 18:20:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 18:20:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:56.678
[xtrace elided: driver.sh@57-61 re-reads the config output marker by marker, matching '->' and confirming vfio-pci == vfio-pci for each device line; dozens of identical read/compare iterations removed]
00:04:59.964 18:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.964 18:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.964 18:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.863 18:20:19 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:01.863 18:20:19 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:01.863 18:20:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.863 18:20:19 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.000 00:05:10.000 real 0m16.633s user 0m4.187s sys 0m8.627s 18:20:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.000 18:20:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:10.000 ************************************ 00:05:10.000 END TEST guess_driver 00:05:10.000 ************************************ 00:05:10.000 18:20:27 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:10.000 00:05:10.000 real 0m24.228s 00:05:10.000 user 0m6.468s
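The guess_driver pass above settled on vfio-pci because the host exposes populated IOMMU groups (238 of them) and modprobe can resolve vfio_pci's dependency chain. A sketch of that decision (an assumed reconstruction; the fallback branch is not exercised in this run, though SPDK's setup commonly falls back to uio_pci_generic):

pick_driver_sketch() {
    # the trace also inspects /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    shopt -s nullglob   # so an empty iommu_groups dir yields zero matches
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci            # this run: 238 IOMMU groups, so vfio-pci wins
    else
        echo uio_pci_generic     # assumed fallback; not exercised in this log
    fi
}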
00:05:10.000 sys 0m13.140s 00:05:10.000 18:20:27 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.000 18:20:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:10.000 ************************************ 00:05:10.000 END TEST driver 00:05:10.000 ************************************ 00:05:10.000 18:20:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:10.000 18:20:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:10.000 18:20:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.000 18:20:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.000 18:20:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:10.000 ************************************ 00:05:10.000 START TEST devices 00:05:10.000 ************************************ 00:05:10.000 18:20:27 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:10.000 * Looking for test storage... 00:05:10.000 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:10.000 18:20:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:10.000 18:20:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:10.000 18:20:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.000 18:20:27 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:16.646 18:20:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:16.646 18:20:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:16.646 18:20:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:16.646 18:20:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.646 18:20:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:16.646 18:20:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:16.646 18:20:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:16.646 18:20:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:05:16.646 18:20:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:16.646 18:20:33 setup.sh.devices -- 
scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:16.646 18:20:33 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:16.646 No valid GPT data, bailing 00:05:16.647 18:20:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:16.647 18:20:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:16.647 18:20:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:16.647 18:20:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:16.647 18:20:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:16.647 18:20:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:16.647 18:20:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:05:16.647 18:20:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:05:16.647 18:20:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:16.647 18:20:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:05:16.647 18:20:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:16.647 18:20:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:16.647 18:20:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:16.647 18:20:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.647 18:20:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.647 18:20:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:16.647 ************************************ 00:05:16.647 START TEST nvme_mount 00:05:16.647 ************************************ 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 
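Device selection above hinges on two gates: blkid must report no partition-table type for the disk (spdk-gpt.py printing "No valid GPT data, bailing" is the desired outcome here), and the disk must be at least min_disk_size. A sketch of those checks (helper name assumed; /sys/block/<dev>/size counts 512-byte sectors, so 7814037168 sectors is the 4000787030016 bytes echoed above):

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as set at devices.sh@198

disk_usable_sketch() {
    local dev=$1
    # an empty PTTYPE from blkid means no GPT/MBR is present, i.e. the disk is unclaimed
    [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || return 1
    # /sys/block/<dev>/size reports the capacity in 512-byte sectors
    local bytes=$(( $(< "/sys/block/$dev/size") * 512 ))
    (( bytes >= min_disk_size ))
}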
00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:16.647 18:20:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:16.908 Creating new GPT entries in memory. 00:05:16.908 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:16.908 other utilities. 00:05:16.908 18:20:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:16.908 18:20:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.908 18:20:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.908 18:20:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.908 18:20:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:17.844 Creating new GPT entries in memory. 00:05:17.844 The operation has completed successfully. 00:05:17.844 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:17.844 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.844 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3779812 00:05:17.844 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.844 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:17.844 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.844 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:17.844 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n 
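The partition_drive/mkfs sequence just traced reduces to four commands: zap any existing labels, create one partition spanning sectors 2048 through 2099199 (2097152 sectors of 512 B, i.e. 1 GiB), format it, and mount it. A sketch (an assumed standalone reconstruction):

partition_and_mount_sketch() {
    local disk=$1 mnt=$2
    sgdisk "/dev/$disk" --zap-all              # destroys GPT/MBR structures, as the log reports
    sgdisk "/dev/$disk" --new=1:2048:2099199   # partition 1: 2097152 sectors = 1 GiB
    mkfs.ext4 -qF "/dev/${disk}p1"             # quiet, forced ext4 format
    mkdir -p "$mnt"
    mount "/dev/${disk}p1" "$mnt"
}

The real test additionally serializes against udev (sync_dev_uevents.sh) and wraps the partitioning sgdisk call in flock on the disk node, which is why those extra commands appear in the trace above.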
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.102 18:20:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:22.287 18:20:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:22.287 18:20:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:22.287 18:20:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:22.287 18:20:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace elided: devices.sh@60/@62 repeats the identical read/compare for each remaining PCI function in the config output (0000:00:04.0-7, then 0000:80:04.0-7); none match the allowed device 0000:1a:00.0]
00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:24.192 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.192 18:20:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:24.192 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:24.192 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:24.192 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:24.192 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs
/dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.192 18:20:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.379 18:20:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.281 18:20:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.569 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.570 18:20:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:36.101 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:36.101 00:05:36.101 real 0m19.742s 00:05:36.101 user 0m5.628s 00:05:36.101 sys 0m11.853s 
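[editor's note] The nvme_mount pass above has now run its full cycle twice: zap the GPT, create one partition, format and mount it, drop a test file, confirm via setup.sh config that the active mount blocks PCI rebinding, then unmount and wipe. Before its END banner and the dm_mount pass that follows, here is a condensed standalone sketch of that cycle; /dev/nvme0n1 and the mount point are stand-ins for whatever this host's harness selected, so treat it strictly as scratch-disk material.

  # DESTRUCTIVE sketch of the nvme_mount cycle -- assumes a throwaway disk (hypothetical target)
  disk=/dev/nvme0n1
  mnt=/tmp/nvme_mount_test
  sgdisk "$disk" --zap-all                      # destroy GPT/MBR state (common.sh@56)
  sgdisk "$disk" --new=1:2048:2099199           # one 1 GiB partition, the same LBA range as the trace
  udevadm settle                                # stand-in for scripts/sync_dev_uevents.sh
  mkfs.ext4 -qF "${disk}p1"                     # quiet, forced format (common.sh@71)
  mkdir -p "$mnt" && mount "${disk}p1" "$mnt"   # mount it (common.sh@72)
  touch "$mnt/test_nvme"                        # the dummy file verify() checks for
  mountpoint -q "$mnt" && umount "$mnt"         # cleanup_nvme: unmount first...
  wipefs --all "${disk}p1"                      # ...then erase the ext4 magic (the '53 ef' bytes above)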
00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.101 18:20:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:36.101 ************************************ 00:05:36.101 END TEST nvme_mount 00:05:36.101 ************************************ 00:05:36.101 18:20:53 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:36.101 18:20:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:36.101 18:20:53 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.101 18:20:53 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.101 18:20:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:36.101 ************************************ 00:05:36.101 START TEST dm_mount 00:05:36.101 ************************************ 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:36.101 18:20:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:36.669 Creating new GPT entries in memory. 00:05:36.669 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:36.669 other utilities. 
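[editor's note] The dm_mount variant that has just started repeats the same partitioning dance, but with two 1 GiB partitions that it then fuses into a single device-mapper node before formatting. The trace below only shows `dmsetup create nvme_dm_test` without its table, so the linear concatenation here is an assumption about what the harness feeds it, not a quote from devices.sh:

  # Join two existing partitions into one linear dm device (table layout assumed, not from the trace)
  p1=/dev/nvme0n1p1
  p2=/dev/nvme0n1p2
  s1=$(blockdev --getsz "$p1")     # segment sizes in 512-byte sectors
  s2=$(blockdev --getsz "$p2")
  dmsetup create nvme_dm_test <<EOF
  0 $s1 linear $p1 0
  $s1 $s2 linear $p2 0
  EOF
  readlink -f /dev/mapper/nvme_dm_test      # resolves to /dev/dm-N, as devices.sh@165 records
  ls /sys/class/block/nvme0n1p1/holders     # the dm node shows up here once the mapping exists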
00:05:36.669 18:20:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:36.669 18:20:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.669 18:20:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:36.669 18:20:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:36.669 18:20:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:38.044 Creating new GPT entries in memory. 00:05:38.044 The operation has completed successfully. 00:05:38.044 18:20:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:38.044 18:20:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.044 18:20:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:38.044 18:20:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:38.044 18:20:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:38.983 The operation has completed successfully. 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3785124 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.983 18:20:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:43.172 18:21:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.105 18:21:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.293 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.294 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.294 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.294 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.294 18:21:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.199 18:21:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:51.199 18:21:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:51.199 18:21:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:51.199 18:21:08 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:51.199 18:21:08 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:51.199 18:21:08 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:51.199 18:21:08 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:51.199 18:21:09 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.199 18:21:09 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:51.199 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:51.199 18:21:09 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:51.199 18:21:09 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:51.199 00:05:51.199 real 0m15.272s 00:05:51.199 user 0m3.922s 00:05:51.199 sys 0m8.379s 
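[editor's note] Teardown is the part of this test that is easiest to get wrong by hand, and the cleanup trace that follows runs it in a deliberate order: unmount, remove the dm mapping (which still holds references on both partitions), wipe each partition's filesystem magic, and only then erase the GPT headers and protective MBR on the whole disk. A condensed sketch of that sequence, with the same placeholder device names assumed as above:

  # Teardown order mirroring cleanup_dm + cleanup_nvme (device names assumed from this run)
  mnt=/tmp/dm_mount_test                           # placeholder for the harness's dm_mount directory
  mountpoint -q "$mnt" && umount "$mnt"            # never wipe a mounted filesystem
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
      [[ -b $part ]] && wipefs --all "$part"       # drops the ext4 magic on each partition
  done
  wipefs --all /dev/nvme0n1                        # erases primary GPT, backup GPT, and PMBR, then
                                                   # triggers the partition-table re-read seen below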
00:05:51.199 18:21:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.199 18:21:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:51.199 ************************************ 00:05:51.199 END TEST dm_mount 00:05:51.199 ************************************ 00:05:51.199 18:21:09 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:51.199 18:21:09 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:51.199 18:21:09 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:51.199 18:21:09 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:51.199 18:21:09 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.199 18:21:09 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:51.199 18:21:09 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.199 18:21:09 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:51.199 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:51.200 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:51.200 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:51.200 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:51.200 18:21:09 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:51.200 18:21:09 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:51.200 18:21:09 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:51.200 18:21:09 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.200 18:21:09 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:51.200 18:21:09 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.200 18:21:09 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:51.200 00:05:51.200 real 0m41.973s 00:05:51.200 user 0m11.798s 00:05:51.200 sys 0m24.838s 00:05:51.200 18:21:09 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.200 18:21:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:51.200 ************************************ 00:05:51.200 END TEST devices 00:05:51.200 ************************************ 00:05:51.459 18:21:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:51.459 00:05:51.459 real 2m30.609s 00:05:51.459 user 0m43.581s 00:05:51.459 sys 1m30.169s 00:05:51.459 18:21:09 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.459 18:21:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:51.459 ************************************ 00:05:51.459 END TEST setup.sh 00:05:51.459 ************************************ 00:05:51.459 18:21:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.459 18:21:09 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:55.650 Hugepages 00:05:55.650 node hugesize free / total 00:05:55.650 node0 1048576kB 0 / 0 00:05:55.650 node0 2048kB 2048 / 2048 00:05:55.650 node1 1048576kB 0 / 0 00:05:55.650 node1 2048kB 0 / 0 00:05:55.650 00:05:55.650 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:55.650 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 
00:05:55.650 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:55.650 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:55.650 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:55.650 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:55.650 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:55.650 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:55.650 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:55.650 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:55.650 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:55.650 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:55.650 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:55.650 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:55.650 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:55.650 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:55.650 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:55.650 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:55.650 18:21:13 -- spdk/autotest.sh@130 -- # uname -s 00:05:55.650 18:21:13 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:55.650 18:21:13 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:55.650 18:21:13 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:59.841 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:59.841 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:03.132 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:05.073 18:21:22 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:06.010 18:21:23 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:06.010 18:21:23 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:06.010 18:21:23 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:06.010 18:21:23 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:06.010 18:21:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:06.010 18:21:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:06.010 18:21:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:06.010 18:21:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:06.010 18:21:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:06.010 18:21:23 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:06.010 18:21:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0 00:06:06.010 18:21:23 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:10.202 Waiting for block devices as requested 00:06:10.202 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:06:10.202 0000:00:04.7 (8086 2021): 
vfio-pci -> ioatdma 00:06:10.202 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:10.202 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:10.202 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:10.202 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:10.203 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:10.203 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:10.203 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:10.464 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:10.464 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:10.464 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:10.722 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:10.722 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:10.722 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:10.980 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:10.980 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:12.914 18:21:31 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:12.914 18:21:31 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:06:12.914 18:21:31 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:12.914 18:21:31 -- common/autotest_common.sh@1502 -- # grep 0000:1a:00.0/nvme/nvme 00:06:12.914 18:21:31 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:06:12.914 18:21:31 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:06:12.914 18:21:31 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:06:12.914 18:21:31 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:12.914 18:21:31 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:12.914 18:21:31 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:12.914 18:21:31 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:12.914 18:21:31 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:12.914 18:21:31 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:12.914 18:21:31 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:12.914 18:21:31 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:12.914 18:21:31 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:13.172 18:21:31 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:13.172 18:21:31 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:13.172 18:21:31 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:13.172 18:21:31 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:13.172 18:21:31 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:13.172 18:21:31 -- common/autotest_common.sh@1557 -- # continue 00:06:13.172 18:21:31 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:13.172 18:21:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.172 18:21:31 -- common/autotest_common.sh@10 -- # set +x 00:06:13.172 18:21:31 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:13.172 18:21:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.172 18:21:31 -- common/autotest_common.sh@10 -- # set +x 00:06:13.172 18:21:31 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:17.458 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:17.458 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:06:17.458 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:17.458 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:17.458 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:17.458 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:17.458 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:17.458 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:17.459 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:17.459 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:17.459 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:17.459 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:17.459 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:17.459 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:17.459 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:17.459 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:19.990 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:22.519 18:21:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:22.519 18:21:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.519 18:21:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.519 18:21:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:22.519 18:21:40 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:22.519 18:21:40 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:22.519 18:21:40 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:22.519 18:21:40 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:22.519 18:21:40 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:22.519 18:21:40 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:22.519 18:21:40 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:22.519 18:21:40 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:22.519 18:21:40 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:22.519 18:21:40 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:22.519 18:21:40 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:22.519 18:21:40 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0 00:06:22.519 18:21:40 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:22.519 18:21:40 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:06:22.519 18:21:40 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:22.519 18:21:40 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:22.519 18:21:40 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:22.519 18:21:40 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:1a:00.0 00:06:22.519 18:21:40 -- common/autotest_common.sh@1592 -- # [[ -z 0000:1a:00.0 ]] 00:06:22.519 18:21:40 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3796750 00:06:22.519 18:21:40 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.519 18:21:40 -- common/autotest_common.sh@1598 -- # waitforlisten 3796750 00:06:22.519 18:21:40 -- common/autotest_common.sh@829 -- # '[' -z 3796750 ']' 00:06:22.519 18:21:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.519 18:21:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.519 18:21:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:22.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.519 18:21:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.519 18:21:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.519 [2024-07-21 18:21:40.412637] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:22.519 [2024-07-21 18:21:40.412723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3796750 ] 00:06:22.519 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.519 [2024-07-21 18:21:40.534564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.519 [2024-07-21 18:21:40.640477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.456 18:21:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.456 18:21:41 -- common/autotest_common.sh@862 -- # return 0 00:06:23.456 18:21:41 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:23.456 18:21:41 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:23.456 18:21:41 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:06:26.741 nvme0n1 00:06:26.741 18:21:44 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:26.741 [2024-07-21 18:21:44.726599] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:26.741 request: 00:06:26.741 { 00:06:26.741 "nvme_ctrlr_name": "nvme0", 00:06:26.741 "password": "test", 00:06:26.741 "method": "bdev_nvme_opal_revert", 00:06:26.741 "req_id": 1 00:06:26.741 } 00:06:26.741 Got JSON-RPC error response 00:06:26.741 response: 00:06:26.741 { 00:06:26.741 "code": -32602, 00:06:26.741 "message": "Invalid parameters" 00:06:26.741 } 00:06:26.741 18:21:44 -- common/autotest_common.sh@1604 -- # true 00:06:26.741 18:21:44 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:26.741 18:21:44 -- common/autotest_common.sh@1608 -- # killprocess 3796750 00:06:26.741 18:21:44 -- common/autotest_common.sh@948 -- # '[' -z 3796750 ']' 00:06:26.741 18:21:44 -- common/autotest_common.sh@952 -- # kill -0 3796750 00:06:26.741 18:21:44 -- common/autotest_common.sh@953 -- # uname 00:06:26.741 18:21:44 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.741 18:21:44 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3796750 00:06:26.741 18:21:44 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.741 18:21:44 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.741 18:21:44 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3796750' 00:06:26.741 killing process with pid 3796750 00:06:26.741 18:21:44 -- common/autotest_common.sh@967 -- # kill 3796750 00:06:26.741 18:21:44 -- common/autotest_common.sh@972 -- # wait 3796750 00:06:30.924 18:21:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:30.924 18:21:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:30.924 18:21:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:30.924 18:21:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:30.924 18:21:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:30.924 18:21:48 -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.924 18:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:30.924 18:21:48 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:30.924 18:21:48 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:30.924 18:21:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.924 18:21:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.924 18:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:30.924 ************************************ 00:06:30.924 START TEST env 00:06:30.924 ************************************ 00:06:30.924 18:21:48 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:30.924 * Looking for test storage... 00:06:30.924 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:06:30.924 18:21:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:30.924 18:21:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.924 18:21:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.924 18:21:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:30.924 ************************************ 00:06:30.924 START TEST env_memory 00:06:30.924 ************************************ 00:06:30.924 18:21:48 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:30.924 00:06:30.924 00:06:30.924 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.924 http://cunit.sourceforge.net/ 00:06:30.924 00:06:30.924 00:06:30.924 Suite: memory 00:06:30.924 Test: alloc and free memory map ...[2024-07-21 18:21:48.970398] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:30.924 passed 00:06:30.924 Test: mem map translation ...[2024-07-21 18:21:48.990099] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:30.924 [2024-07-21 18:21:48.990124] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:30.924 [2024-07-21 18:21:48.990172] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:30.924 [2024-07-21 18:21:48.990185] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:30.924 passed 00:06:30.924 Test: mem map registration ...[2024-07-21 18:21:49.023669] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:30.924 [2024-07-21 18:21:49.023693] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:30.924 passed 00:06:30.924 Test: mem map adjacent registrations ...passed 00:06:30.924 00:06:30.924 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:30.924 suites 1 1 n/a 0 0 00:06:30.924 tests 4 4 4 0 0 00:06:30.924 asserts 152 152 152 0 n/a 00:06:30.924 00:06:30.924 Elapsed time = 0.120 seconds 00:06:30.924 00:06:30.924 real 0m0.135s 00:06:30.924 user 0m0.117s 00:06:30.925 sys 0m0.017s 00:06:30.925 18:21:49 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.925 18:21:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:30.925 ************************************ 00:06:30.925 END TEST env_memory 00:06:30.925 ************************************ 00:06:30.925 18:21:49 env -- common/autotest_common.sh@1142 -- # return 0 00:06:30.925 18:21:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:30.925 18:21:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.925 18:21:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.925 18:21:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:31.184 ************************************ 00:06:31.184 START TEST env_vtophys 00:06:31.184 ************************************ 00:06:31.184 18:21:49 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:31.184 EAL: lib.eal log level changed from notice to debug 00:06:31.184 EAL: Detected lcore 0 as core 0 on socket 0 00:06:31.184 EAL: Detected lcore 1 as core 1 on socket 0 00:06:31.184 EAL: Detected lcore 2 as core 2 on socket 0 00:06:31.184 EAL: Detected lcore 3 as core 3 on socket 0 00:06:31.184 EAL: Detected lcore 4 as core 4 on socket 0 00:06:31.184 EAL: Detected lcore 5 as core 8 on socket 0 00:06:31.184 EAL: Detected lcore 6 as core 9 on socket 0 00:06:31.184 EAL: Detected lcore 7 as core 10 on socket 0 00:06:31.184 EAL: Detected lcore 8 as core 11 on socket 0 00:06:31.184 EAL: Detected lcore 9 as core 16 on socket 0 00:06:31.184 EAL: Detected lcore 10 as core 17 on socket 0 00:06:31.184 EAL: Detected lcore 11 as core 18 on socket 0 00:06:31.184 EAL: Detected lcore 12 as core 19 on socket 0 00:06:31.184 EAL: Detected lcore 13 as core 20 on socket 0 00:06:31.184 EAL: Detected lcore 14 as core 24 on socket 0 00:06:31.184 EAL: Detected lcore 15 as core 25 on socket 0 00:06:31.184 EAL: Detected lcore 16 as core 26 on socket 0 00:06:31.184 EAL: Detected lcore 17 as core 27 on socket 0 00:06:31.184 EAL: Detected lcore 18 as core 0 on socket 1 00:06:31.184 EAL: Detected lcore 19 as core 1 on socket 1 00:06:31.184 EAL: Detected lcore 20 as core 2 on socket 1 00:06:31.184 EAL: Detected lcore 21 as core 3 on socket 1 00:06:31.184 EAL: Detected lcore 22 as core 4 on socket 1 00:06:31.184 EAL: Detected lcore 23 as core 8 on socket 1 00:06:31.184 EAL: Detected lcore 24 as core 9 on socket 1 00:06:31.184 EAL: Detected lcore 25 as core 10 on socket 1 00:06:31.184 EAL: Detected lcore 26 as core 11 on socket 1 00:06:31.184 EAL: Detected lcore 27 as core 16 on socket 1 00:06:31.184 EAL: Detected lcore 28 as core 17 on socket 1 00:06:31.184 EAL: Detected lcore 29 as core 18 on socket 1 00:06:31.184 EAL: Detected lcore 30 as core 19 on socket 1 00:06:31.184 EAL: Detected lcore 31 as core 20 on socket 1 00:06:31.184 EAL: Detected lcore 32 as core 24 on socket 1 00:06:31.184 EAL: Detected lcore 33 as core 25 on socket 1 00:06:31.184 EAL: Detected lcore 34 as core 26 on socket 1 00:06:31.184 EAL: Detected lcore 35 as core 27 on socket 1 00:06:31.184 EAL: Detected lcore 36 as core 0 on socket 0 00:06:31.184 EAL: Detected lcore 37 
as core 1 on socket 0 00:06:31.184 EAL: Detected lcore 38 as core 2 on socket 0 00:06:31.184 EAL: Detected lcore 39 as core 3 on socket 0 00:06:31.184 EAL: Detected lcore 40 as core 4 on socket 0 00:06:31.184 EAL: Detected lcore 41 as core 8 on socket 0 00:06:31.184 EAL: Detected lcore 42 as core 9 on socket 0 00:06:31.184 EAL: Detected lcore 43 as core 10 on socket 0 00:06:31.184 EAL: Detected lcore 44 as core 11 on socket 0 00:06:31.184 EAL: Detected lcore 45 as core 16 on socket 0 00:06:31.184 EAL: Detected lcore 46 as core 17 on socket 0 00:06:31.184 EAL: Detected lcore 47 as core 18 on socket 0 00:06:31.184 EAL: Detected lcore 48 as core 19 on socket 0 00:06:31.184 EAL: Detected lcore 49 as core 20 on socket 0 00:06:31.184 EAL: Detected lcore 50 as core 24 on socket 0 00:06:31.184 EAL: Detected lcore 51 as core 25 on socket 0 00:06:31.184 EAL: Detected lcore 52 as core 26 on socket 0 00:06:31.184 EAL: Detected lcore 53 as core 27 on socket 0 00:06:31.184 EAL: Detected lcore 54 as core 0 on socket 1 00:06:31.184 EAL: Detected lcore 55 as core 1 on socket 1 00:06:31.184 EAL: Detected lcore 56 as core 2 on socket 1 00:06:31.184 EAL: Detected lcore 57 as core 3 on socket 1 00:06:31.184 EAL: Detected lcore 58 as core 4 on socket 1 00:06:31.184 EAL: Detected lcore 59 as core 8 on socket 1 00:06:31.184 EAL: Detected lcore 60 as core 9 on socket 1 00:06:31.184 EAL: Detected lcore 61 as core 10 on socket 1 00:06:31.184 EAL: Detected lcore 62 as core 11 on socket 1 00:06:31.184 EAL: Detected lcore 63 as core 16 on socket 1 00:06:31.184 EAL: Detected lcore 64 as core 17 on socket 1 00:06:31.184 EAL: Detected lcore 65 as core 18 on socket 1 00:06:31.184 EAL: Detected lcore 66 as core 19 on socket 1 00:06:31.184 EAL: Detected lcore 67 as core 20 on socket 1 00:06:31.184 EAL: Detected lcore 68 as core 24 on socket 1 00:06:31.184 EAL: Detected lcore 69 as core 25 on socket 1 00:06:31.184 EAL: Detected lcore 70 as core 26 on socket 1 00:06:31.184 EAL: Detected lcore 71 as core 27 on socket 1 00:06:31.184 EAL: Maximum logical cores by configuration: 128 00:06:31.184 EAL: Detected CPU lcores: 72 00:06:31.184 EAL: Detected NUMA nodes: 2 00:06:31.184 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:31.184 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:31.184 EAL: Checking presence of .so 'librte_eal.so' 00:06:31.184 EAL: Detected static linkage of DPDK 00:06:31.184 EAL: No shared files mode enabled, IPC will be disabled 00:06:31.184 EAL: Bus pci wants IOVA as 'DC' 00:06:31.184 EAL: Buses did not request a specific IOVA mode. 00:06:31.184 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:31.184 EAL: Selected IOVA mode 'VA' 00:06:31.184 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.184 EAL: Probing VFIO support... 00:06:31.184 EAL: IOMMU type 1 (Type 1) is supported 00:06:31.184 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:31.184 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:31.184 EAL: VFIO support initialized 00:06:31.184 EAL: Ask a virtual area of 0x2e000 bytes 00:06:31.184 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:31.184 EAL: Setting up physically contiguous memory... 
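The lcore map printed above folds into a simple two-socket layout: lcores 0-17 and 36-53 land on socket 0, lcores 18-35 and 54-71 on socket 1, so lcores 36-71 are the hyperthread siblings of 0-35. A quick sanity check of the count (the 18-cores-per-socket figure is read off the detection lines themselves; EAL does not report it directly):

  # 18 physical cores per socket x 2 sockets x 2 hardware threads per core
  echo $(( 18 * 2 * 2 ))   # 72, matching "EAL: Detected CPU lcores: 72"
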
00:06:31.184 EAL: Setting maximum number of open files to 524288 00:06:31.184 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:31.184 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:31.184 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:31.184 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.184 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:31.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.184 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.184 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:31.184 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:31.184 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.184 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:31.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.185 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:31.185 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:31.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.185 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:31.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.185 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:31.185 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:31.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.185 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:31.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.185 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:31.185 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:31.185 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:31.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.185 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:31.185 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.185 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:31.185 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:31.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.185 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:31.185 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.185 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:31.185 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:31.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.185 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:31.185 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.185 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:31.185 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:31.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.185 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:31.185 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.185 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:31.185 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:31.185 EAL: Hugepages will be freed exactly as allocated. 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: TSC frequency is ~2300000 KHz 00:06:31.185 EAL: Main lcore 0 is ready (tid=7ff362353a00;cpuset=[0]) 00:06:31.185 EAL: Trying to obtain current memory policy. 00:06:31.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.185 EAL: Restoring previous memory policy: 0 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was expanded by 2MB 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Mem event callback 'spdk:(nil)' registered 00:06:31.185 00:06:31.185 00:06:31.185 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.185 http://cunit.sourceforge.net/ 00:06:31.185 00:06:31.185 00:06:31.185 Suite: components_suite 00:06:31.185 Test: vtophys_malloc_test ...passed 00:06:31.185 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:31.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.185 EAL: Restoring previous memory policy: 4 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was expanded by 4MB 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was shrunk by 4MB 00:06:31.185 EAL: Trying to obtain current memory policy. 00:06:31.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.185 EAL: Restoring previous memory policy: 4 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was expanded by 6MB 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was shrunk by 6MB 00:06:31.185 EAL: Trying to obtain current memory policy. 00:06:31.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.185 EAL: Restoring previous memory policy: 4 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was expanded by 10MB 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was shrunk by 10MB 00:06:31.185 EAL: Trying to obtain current memory policy. 
00:06:31.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.185 EAL: Restoring previous memory policy: 4 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was expanded by 18MB 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was shrunk by 18MB 00:06:31.185 EAL: Trying to obtain current memory policy. 00:06:31.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.185 EAL: Restoring previous memory policy: 4 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was expanded by 34MB 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was shrunk by 34MB 00:06:31.185 EAL: Trying to obtain current memory policy. 00:06:31.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.185 EAL: Restoring previous memory policy: 4 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was expanded by 66MB 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was shrunk by 66MB 00:06:31.185 EAL: Trying to obtain current memory policy. 00:06:31.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.185 EAL: Restoring previous memory policy: 4 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.185 EAL: request: mp_malloc_sync 00:06:31.185 EAL: No shared files mode enabled, IPC is disabled 00:06:31.185 EAL: Heap on socket 0 was expanded by 130MB 00:06:31.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.444 EAL: request: mp_malloc_sync 00:06:31.444 EAL: No shared files mode enabled, IPC is disabled 00:06:31.444 EAL: Heap on socket 0 was shrunk by 130MB 00:06:31.444 EAL: Trying to obtain current memory policy. 00:06:31.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.444 EAL: Restoring previous memory policy: 4 00:06:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.444 EAL: request: mp_malloc_sync 00:06:31.444 EAL: No shared files mode enabled, IPC is disabled 00:06:31.444 EAL: Heap on socket 0 was expanded by 258MB 00:06:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.444 EAL: request: mp_malloc_sync 00:06:31.444 EAL: No shared files mode enabled, IPC is disabled 00:06:31.444 EAL: Heap on socket 0 was shrunk by 258MB 00:06:31.444 EAL: Trying to obtain current memory policy. 
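Each round of vtophys_spdk_malloc_test roughly doubles the allocation, so the heap expansions read 4, 6, 10, 18, 34, 66, 130, 258 MB above and finish with 514 and 1026 MB below. The sizes fit a 2^k + 2 MB progression (this reading is inferred from the log output, not taken from the test source):

  # sizes in MB: one power-of-two step per round, plus the initial 2MB reservation
  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
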
00:06:31.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.703 EAL: Restoring previous memory policy: 4 00:06:31.703 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.703 EAL: request: mp_malloc_sync 00:06:31.703 EAL: No shared files mode enabled, IPC is disabled 00:06:31.703 EAL: Heap on socket 0 was expanded by 514MB 00:06:31.703 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.703 EAL: request: mp_malloc_sync 00:06:31.703 EAL: No shared files mode enabled, IPC is disabled 00:06:31.703 EAL: Heap on socket 0 was shrunk by 514MB 00:06:31.703 EAL: Trying to obtain current memory policy. 00:06:31.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.962 EAL: Restoring previous memory policy: 4 00:06:31.962 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.962 EAL: request: mp_malloc_sync 00:06:31.962 EAL: No shared files mode enabled, IPC is disabled 00:06:31.962 EAL: Heap on socket 0 was expanded by 1026MB 00:06:32.222 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.481 EAL: request: mp_malloc_sync 00:06:32.481 EAL: No shared files mode enabled, IPC is disabled 00:06:32.481 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:32.481 passed 00:06:32.481 00:06:32.481 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.481 suites 1 1 n/a 0 0 00:06:32.481 tests 2 2 2 0 0 00:06:32.481 asserts 497 497 497 0 n/a 00:06:32.481 00:06:32.481 Elapsed time = 1.156 seconds 00:06:32.481 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.481 EAL: request: mp_malloc_sync 00:06:32.481 EAL: No shared files mode enabled, IPC is disabled 00:06:32.481 EAL: Heap on socket 0 was shrunk by 2MB 00:06:32.481 EAL: No shared files mode enabled, IPC is disabled 00:06:32.481 EAL: No shared files mode enabled, IPC is disabled 00:06:32.481 EAL: No shared files mode enabled, IPC is disabled 00:06:32.481 00:06:32.481 real 0m1.325s 00:06:32.481 user 0m0.745s 00:06:32.481 sys 0m0.547s 00:06:32.481 18:21:50 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.481 18:21:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:32.481 ************************************ 00:06:32.481 END TEST env_vtophys 00:06:32.481 ************************************ 00:06:32.481 18:21:50 env -- common/autotest_common.sh@1142 -- # return 0 00:06:32.481 18:21:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:32.481 18:21:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.481 18:21:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.481 18:21:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.481 ************************************ 00:06:32.481 START TEST env_pci 00:06:32.482 ************************************ 00:06:32.482 18:21:50 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:32.482 00:06:32.482 00:06:32.482 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.482 http://cunit.sourceforge.net/ 00:06:32.482 00:06:32.482 00:06:32.482 Suite: pci 00:06:32.482 Test: pci_hook ...[2024-07-21 18:21:50.574188] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3798148 has claimed it 00:06:32.482 EAL: Cannot find device (10000:00:01.0) 00:06:32.482 EAL: Failed to attach device on primary process 00:06:32.482 passed 
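The pci_hook case above passes precisely because the claim fails: spdk_pci_device_claim serializes ownership of a BDF through a lock file under /var/tmp, and a helper process (3798148 in this run) has already claimed the fake device, so the second claim must error out. The same mechanism can be observed from a shell (the 10000:00:01.0 BDF and the lock path both come from the error text above):

  # while some process holds the claim, the per-device lock file exists
  ls -l /var/tmp/spdk_pci_lock_10000:00:01.0
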
00:06:32.482 00:06:32.482 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.482 suites 1 1 n/a 0 0 00:06:32.482 tests 1 1 1 0 0 00:06:32.482 asserts 25 25 25 0 n/a 00:06:32.482 00:06:32.482 Elapsed time = 0.049 seconds 00:06:32.482 00:06:32.482 real 0m0.069s 00:06:32.482 user 0m0.019s 00:06:32.482 sys 0m0.050s 00:06:32.482 18:21:50 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.482 18:21:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:32.482 ************************************ 00:06:32.482 END TEST env_pci 00:06:32.482 ************************************ 00:06:32.482 18:21:50 env -- common/autotest_common.sh@1142 -- # return 0 00:06:32.482 18:21:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:32.482 18:21:50 env -- env/env.sh@15 -- # uname 00:06:32.482 18:21:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:32.482 18:21:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:32.482 18:21:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.482 18:21:50 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:32.482 18:21:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.482 18:21:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.741 ************************************ 00:06:32.741 START TEST env_dpdk_post_init 00:06:32.741 ************************************ 00:06:32.741 18:21:50 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.741 EAL: Detected CPU lcores: 72 00:06:32.741 EAL: Detected NUMA nodes: 2 00:06:32.741 EAL: Detected static linkage of DPDK 00:06:32.741 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.741 EAL: Selected IOVA mode 'VA' 00:06:32.741 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.741 EAL: VFIO support initialized 00:06:32.741 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.741 EAL: Using IOMMU type 1 (Type 1) 00:06:33.679 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:06:38.959 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:06:38.959 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:06:39.219 Starting DPDK initialization... 00:06:39.219 Starting SPDK post initialization... 00:06:39.219 SPDK NVMe probe 00:06:39.219 Attaching to 0000:1a:00.0 00:06:39.219 Attached to 0000:1a:00.0 00:06:39.219 Cleaning up... 
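env_dpdk_post_init probes and attaches the NVMe controller in-process through the spdk_nvme PCI driver, then releases it on cleanup. The same 0000:1a:00.0 device is attached over JSON-RPC earlier in this run; that standalone form, copied verbatim from the trace near the top of this log, requires a target already listening on /var/tmp/spdk.sock:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0

Both paths end at the same driver; the RPC variant simply defers the probe until a running target receives the call.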
00:06:39.219 00:06:39.219 real 0m6.549s 00:06:39.219 user 0m4.749s 00:06:39.219 sys 0m1.049s 00:06:39.219 18:21:57 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.219 18:21:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:39.219 ************************************ 00:06:39.219 END TEST env_dpdk_post_init 00:06:39.219 ************************************ 00:06:39.219 18:21:57 env -- common/autotest_common.sh@1142 -- # return 0 00:06:39.219 18:21:57 env -- env/env.sh@26 -- # uname 00:06:39.219 18:21:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:39.219 18:21:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:39.219 18:21:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.219 18:21:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.219 18:21:57 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.219 ************************************ 00:06:39.219 START TEST env_mem_callbacks 00:06:39.219 ************************************ 00:06:39.219 18:21:57 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:39.219 EAL: Detected CPU lcores: 72 00:06:39.219 EAL: Detected NUMA nodes: 2 00:06:39.219 EAL: Detected static linkage of DPDK 00:06:39.219 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:39.219 EAL: Selected IOVA mode 'VA' 00:06:39.219 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.219 EAL: VFIO support initialized 00:06:39.219 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:39.219 00:06:39.219 00:06:39.219 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.219 http://cunit.sourceforge.net/ 00:06:39.219 00:06:39.219 00:06:39.219 Suite: memory 00:06:39.219 Test: test ... 
00:06:39.219 register 0x200000200000 2097152 00:06:39.219 malloc 3145728 00:06:39.478 register 0x200000400000 4194304 00:06:39.478 buf 0x200000500000 len 3145728 PASSED 00:06:39.478 malloc 64 00:06:39.478 buf 0x2000004fff40 len 64 PASSED 00:06:39.478 malloc 4194304 00:06:39.478 register 0x200000800000 6291456 00:06:39.478 buf 0x200000a00000 len 4194304 PASSED 00:06:39.478 free 0x200000500000 3145728 00:06:39.478 free 0x2000004fff40 64 00:06:39.478 unregister 0x200000400000 4194304 PASSED 00:06:39.478 free 0x200000a00000 4194304 00:06:39.478 unregister 0x200000800000 6291456 PASSED 00:06:39.478 malloc 8388608 00:06:39.478 register 0x200000400000 10485760 00:06:39.478 buf 0x200000600000 len 8388608 PASSED 00:06:39.478 free 0x200000600000 8388608 00:06:39.478 unregister 0x200000400000 10485760 PASSED 00:06:39.478 passed 00:06:39.478 00:06:39.478 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.478 suites 1 1 n/a 0 0 00:06:39.478 tests 1 1 1 0 0 00:06:39.478 asserts 15 15 15 0 n/a 00:06:39.478 00:06:39.478 Elapsed time = 0.008 seconds 00:06:39.478 00:06:39.478 real 0m0.092s 00:06:39.478 user 0m0.024s 00:06:39.478 sys 0m0.067s 00:06:39.478 18:21:57 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.478 18:21:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 ************************************ 00:06:39.478 END TEST env_mem_callbacks 00:06:39.478 ************************************ 00:06:39.478 18:21:57 env -- common/autotest_common.sh@1142 -- # return 0 00:06:39.478 00:06:39.478 real 0m8.692s 00:06:39.478 user 0m5.845s 00:06:39.478 sys 0m2.098s 00:06:39.478 18:21:57 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.479 18:21:57 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.479 ************************************ 00:06:39.479 END TEST env 00:06:39.479 ************************************ 00:06:39.479 18:21:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:39.479 18:21:57 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:39.479 18:21:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.479 18:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.479 18:21:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.479 ************************************ 00:06:39.479 START TEST rpc 00:06:39.479 ************************************ 00:06:39.479 18:21:57 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:39.479 * Looking for test storage... 00:06:39.479 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:39.479 18:21:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3799212 00:06:39.479 18:21:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.479 18:21:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:39.479 18:21:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3799212 00:06:39.479 18:21:57 rpc -- common/autotest_common.sh@829 -- # '[' -z 3799212 ']' 00:06:39.479 18:21:57 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.479 18:21:57 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.479 18:21:57 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
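The waitforlisten helper traced just above blocks until the freshly started target answers on /var/tmp/spdk.sock; the "Waiting for process..." message it echoes appears below. A minimal stand-in for that wait, under the assumption that any successful RPC proves liveness (the polling loop is a sketch, not the actual autotest_common.sh implementation; the socket path and rpc.py location are taken from the trace):

  until /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done
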
00:06:39.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.479 18:21:57 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.479 18:21:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.738 [2024-07-21 18:21:57.704074] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:39.738 [2024-07-21 18:21:57.704152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799212 ] 00:06:39.738 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.738 [2024-07-21 18:21:57.826748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.738 [2024-07-21 18:21:57.928223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:39.738 [2024-07-21 18:21:57.928293] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3799212' to capture a snapshot of events at runtime. 00:06:39.738 [2024-07-21 18:21:57.928307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.738 [2024-07-21 18:21:57.928320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.738 [2024-07-21 18:21:57.928331] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3799212 for offline analysis/debug. 00:06:39.738 [2024-07-21 18:21:57.928360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.673 18:21:58 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.673 18:21:58 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:40.673 18:21:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:40.673 18:21:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:40.673 18:21:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:40.673 18:21:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:40.673 18:21:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.673 18:21:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.673 18:21:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.673 ************************************ 00:06:40.673 START TEST rpc_integrity 00:06:40.673 ************************************ 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:40.673 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.673 18:21:58 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:40.673 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:40.673 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:40.673 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.673 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:40.673 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.673 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.673 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:40.673 { 00:06:40.673 "name": "Malloc0", 00:06:40.673 "aliases": [ 00:06:40.673 "c925ffa0-edb5-4ab0-8a85-5a8f67e118e8" 00:06:40.674 ], 00:06:40.674 "product_name": "Malloc disk", 00:06:40.674 "block_size": 512, 00:06:40.674 "num_blocks": 16384, 00:06:40.674 "uuid": "c925ffa0-edb5-4ab0-8a85-5a8f67e118e8", 00:06:40.674 "assigned_rate_limits": { 00:06:40.674 "rw_ios_per_sec": 0, 00:06:40.674 "rw_mbytes_per_sec": 0, 00:06:40.674 "r_mbytes_per_sec": 0, 00:06:40.674 "w_mbytes_per_sec": 0 00:06:40.674 }, 00:06:40.674 "claimed": false, 00:06:40.674 "zoned": false, 00:06:40.674 "supported_io_types": { 00:06:40.674 "read": true, 00:06:40.674 "write": true, 00:06:40.674 "unmap": true, 00:06:40.674 "flush": true, 00:06:40.674 "reset": true, 00:06:40.674 "nvme_admin": false, 00:06:40.674 "nvme_io": false, 00:06:40.674 "nvme_io_md": false, 00:06:40.674 "write_zeroes": true, 00:06:40.674 "zcopy": true, 00:06:40.674 "get_zone_info": false, 00:06:40.674 "zone_management": false, 00:06:40.674 "zone_append": false, 00:06:40.674 "compare": false, 00:06:40.674 "compare_and_write": false, 00:06:40.674 "abort": true, 00:06:40.674 "seek_hole": false, 00:06:40.674 "seek_data": false, 00:06:40.674 "copy": true, 00:06:40.674 "nvme_iov_md": false 00:06:40.674 }, 00:06:40.674 "memory_domains": [ 00:06:40.674 { 00:06:40.674 "dma_device_id": "system", 00:06:40.674 "dma_device_type": 1 00:06:40.674 }, 00:06:40.674 { 00:06:40.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.674 "dma_device_type": 2 00:06:40.674 } 00:06:40.674 ], 00:06:40.674 "driver_specific": {} 00:06:40.674 } 00:06:40.674 ]' 00:06:40.674 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:40.674 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:40.674 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:40.674 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.674 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.674 [2024-07-21 18:21:58.866981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:40.674 [2024-07-21 18:21:58.867026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.674 [2024-07-21 18:21:58.867056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6213690 00:06:40.674 [2024-07-21 18:21:58.867070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:40.674 [2024-07-21 18:21:58.868230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.674 [2024-07-21 18:21:58.868259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:40.674 Passthru0 00:06:40.674 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.674 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:40.674 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.674 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.933 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.933 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:40.933 { 00:06:40.933 "name": "Malloc0", 00:06:40.933 "aliases": [ 00:06:40.933 "c925ffa0-edb5-4ab0-8a85-5a8f67e118e8" 00:06:40.933 ], 00:06:40.933 "product_name": "Malloc disk", 00:06:40.933 "block_size": 512, 00:06:40.933 "num_blocks": 16384, 00:06:40.933 "uuid": "c925ffa0-edb5-4ab0-8a85-5a8f67e118e8", 00:06:40.933 "assigned_rate_limits": { 00:06:40.933 "rw_ios_per_sec": 0, 00:06:40.933 "rw_mbytes_per_sec": 0, 00:06:40.933 "r_mbytes_per_sec": 0, 00:06:40.933 "w_mbytes_per_sec": 0 00:06:40.933 }, 00:06:40.933 "claimed": true, 00:06:40.933 "claim_type": "exclusive_write", 00:06:40.933 "zoned": false, 00:06:40.933 "supported_io_types": { 00:06:40.933 "read": true, 00:06:40.933 "write": true, 00:06:40.933 "unmap": true, 00:06:40.933 "flush": true, 00:06:40.933 "reset": true, 00:06:40.933 "nvme_admin": false, 00:06:40.933 "nvme_io": false, 00:06:40.933 "nvme_io_md": false, 00:06:40.933 "write_zeroes": true, 00:06:40.933 "zcopy": true, 00:06:40.933 "get_zone_info": false, 00:06:40.933 "zone_management": false, 00:06:40.933 "zone_append": false, 00:06:40.933 "compare": false, 00:06:40.933 "compare_and_write": false, 00:06:40.933 "abort": true, 00:06:40.933 "seek_hole": false, 00:06:40.933 "seek_data": false, 00:06:40.933 "copy": true, 00:06:40.933 "nvme_iov_md": false 00:06:40.933 }, 00:06:40.933 "memory_domains": [ 00:06:40.933 { 00:06:40.933 "dma_device_id": "system", 00:06:40.933 "dma_device_type": 1 00:06:40.933 }, 00:06:40.933 { 00:06:40.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.933 "dma_device_type": 2 00:06:40.933 } 00:06:40.933 ], 00:06:40.933 "driver_specific": {} 00:06:40.933 }, 00:06:40.933 { 00:06:40.933 "name": "Passthru0", 00:06:40.933 "aliases": [ 00:06:40.933 "f0abd8ac-7cb2-5976-b6df-13511414631a" 00:06:40.933 ], 00:06:40.933 "product_name": "passthru", 00:06:40.933 "block_size": 512, 00:06:40.933 "num_blocks": 16384, 00:06:40.933 "uuid": "f0abd8ac-7cb2-5976-b6df-13511414631a", 00:06:40.933 "assigned_rate_limits": { 00:06:40.933 "rw_ios_per_sec": 0, 00:06:40.933 "rw_mbytes_per_sec": 0, 00:06:40.933 "r_mbytes_per_sec": 0, 00:06:40.933 "w_mbytes_per_sec": 0 00:06:40.933 }, 00:06:40.933 "claimed": false, 00:06:40.933 "zoned": false, 00:06:40.933 "supported_io_types": { 00:06:40.933 "read": true, 00:06:40.933 "write": true, 00:06:40.933 "unmap": true, 00:06:40.933 "flush": true, 00:06:40.933 "reset": true, 00:06:40.933 "nvme_admin": false, 00:06:40.933 "nvme_io": false, 00:06:40.933 "nvme_io_md": false, 00:06:40.933 "write_zeroes": true, 00:06:40.933 "zcopy": true, 00:06:40.933 "get_zone_info": false, 00:06:40.933 "zone_management": false, 00:06:40.933 "zone_append": false, 00:06:40.933 "compare": false, 00:06:40.933 "compare_and_write": false, 00:06:40.933 "abort": true, 00:06:40.933 
"seek_hole": false, 00:06:40.933 "seek_data": false, 00:06:40.933 "copy": true, 00:06:40.933 "nvme_iov_md": false 00:06:40.933 }, 00:06:40.933 "memory_domains": [ 00:06:40.933 { 00:06:40.933 "dma_device_id": "system", 00:06:40.933 "dma_device_type": 1 00:06:40.933 }, 00:06:40.933 { 00:06:40.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.933 "dma_device_type": 2 00:06:40.933 } 00:06:40.933 ], 00:06:40.933 "driver_specific": { 00:06:40.933 "passthru": { 00:06:40.933 "name": "Passthru0", 00:06:40.933 "base_bdev_name": "Malloc0" 00:06:40.933 } 00:06:40.933 } 00:06:40.933 } 00:06:40.933 ]' 00:06:40.933 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:40.933 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:40.933 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:40.933 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.933 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.933 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.933 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:40.933 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.934 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.934 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.934 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:40.934 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.934 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.934 18:21:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.934 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:40.934 18:21:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:40.934 18:21:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:40.934 00:06:40.934 real 0m0.298s 00:06:40.934 user 0m0.181s 00:06:40.934 sys 0m0.058s 00:06:40.934 18:21:59 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.934 18:21:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.934 ************************************ 00:06:40.934 END TEST rpc_integrity 00:06:40.934 ************************************ 00:06:40.934 18:21:59 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:40.934 18:21:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:40.934 18:21:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.934 18:21:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.934 18:21:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.934 ************************************ 00:06:40.934 START TEST rpc_plugins 00:06:40.934 ************************************ 00:06:40.934 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:40.934 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:40.934 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.934 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.934 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.934 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:40.934 18:21:59 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:40.934 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.934 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.934 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.934 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:40.934 { 00:06:40.934 "name": "Malloc1", 00:06:40.934 "aliases": [ 00:06:40.934 "0e2e985f-cc62-42cb-a78f-8fb4c00f996d" 00:06:40.934 ], 00:06:40.934 "product_name": "Malloc disk", 00:06:40.934 "block_size": 4096, 00:06:40.934 "num_blocks": 256, 00:06:40.934 "uuid": "0e2e985f-cc62-42cb-a78f-8fb4c00f996d", 00:06:40.934 "assigned_rate_limits": { 00:06:40.934 "rw_ios_per_sec": 0, 00:06:40.934 "rw_mbytes_per_sec": 0, 00:06:40.934 "r_mbytes_per_sec": 0, 00:06:40.934 "w_mbytes_per_sec": 0 00:06:40.934 }, 00:06:40.934 "claimed": false, 00:06:40.934 "zoned": false, 00:06:40.934 "supported_io_types": { 00:06:40.934 "read": true, 00:06:40.934 "write": true, 00:06:40.934 "unmap": true, 00:06:40.934 "flush": true, 00:06:40.934 "reset": true, 00:06:40.934 "nvme_admin": false, 00:06:40.934 "nvme_io": false, 00:06:40.934 "nvme_io_md": false, 00:06:40.934 "write_zeroes": true, 00:06:40.934 "zcopy": true, 00:06:40.934 "get_zone_info": false, 00:06:40.934 "zone_management": false, 00:06:40.934 "zone_append": false, 00:06:40.934 "compare": false, 00:06:40.934 "compare_and_write": false, 00:06:40.934 "abort": true, 00:06:40.934 "seek_hole": false, 00:06:40.934 "seek_data": false, 00:06:40.934 "copy": true, 00:06:40.934 "nvme_iov_md": false 00:06:40.934 }, 00:06:40.934 "memory_domains": [ 00:06:40.934 { 00:06:40.934 "dma_device_id": "system", 00:06:40.934 "dma_device_type": 1 00:06:40.934 }, 00:06:40.934 { 00:06:40.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.934 "dma_device_type": 2 00:06:40.934 } 00:06:40.934 ], 00:06:40.934 "driver_specific": {} 00:06:40.934 } 00:06:40.934 ]' 00:06:40.934 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:41.192 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:41.192 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:41.192 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.192 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:41.192 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.192 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:41.192 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.192 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:41.192 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.192 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:41.192 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:41.192 18:21:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:41.192 00:06:41.192 real 0m0.155s 00:06:41.192 user 0m0.087s 00:06:41.192 sys 0m0.031s 00:06:41.192 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.192 18:21:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:41.192 ************************************ 00:06:41.192 END TEST rpc_plugins 00:06:41.192 ************************************ 00:06:41.192 18:21:59 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:41.192 18:21:59 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:41.192 18:21:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.192 18:21:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.192 18:21:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.192 ************************************ 00:06:41.192 START TEST rpc_trace_cmd_test 00:06:41.192 ************************************ 00:06:41.192 18:21:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:41.192 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:41.192 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:41.192 18:21:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.192 18:21:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.192 18:21:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.192 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:41.192 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3799212", 00:06:41.192 "tpoint_group_mask": "0x8", 00:06:41.192 "iscsi_conn": { 00:06:41.192 "mask": "0x2", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "scsi": { 00:06:41.192 "mask": "0x4", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "bdev": { 00:06:41.192 "mask": "0x8", 00:06:41.192 "tpoint_mask": "0xffffffffffffffff" 00:06:41.192 }, 00:06:41.192 "nvmf_rdma": { 00:06:41.192 "mask": "0x10", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "nvmf_tcp": { 00:06:41.192 "mask": "0x20", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "ftl": { 00:06:41.192 "mask": "0x40", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "blobfs": { 00:06:41.192 "mask": "0x80", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "dsa": { 00:06:41.192 "mask": "0x200", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "thread": { 00:06:41.192 "mask": "0x400", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "nvme_pcie": { 00:06:41.192 "mask": "0x800", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "iaa": { 00:06:41.192 "mask": "0x1000", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "nvme_tcp": { 00:06:41.192 "mask": "0x2000", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "bdev_nvme": { 00:06:41.192 "mask": "0x4000", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 }, 00:06:41.192 "sock": { 00:06:41.192 "mask": "0x8000", 00:06:41.192 "tpoint_mask": "0x0" 00:06:41.192 } 00:06:41.192 }' 00:06:41.192 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:41.192 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:41.450 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:41.450 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:41.450 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:41.450 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:41.450 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:41.450 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:41.450 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:41.450 18:21:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:06:41.450 00:06:41.450 real 0m0.245s 00:06:41.450 user 0m0.198s 00:06:41.450 sys 0m0.038s 00:06:41.451 18:21:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.451 18:21:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.451 ************************************ 00:06:41.451 END TEST rpc_trace_cmd_test 00:06:41.451 ************************************ 00:06:41.451 18:21:59 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:41.451 18:21:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:41.451 18:21:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:41.451 18:21:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:41.451 18:21:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.451 18:21:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.451 18:21:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.451 ************************************ 00:06:41.451 START TEST rpc_daemon_integrity 00:06:41.451 ************************************ 00:06:41.451 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:41.451 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:41.709 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:41.710 { 00:06:41.710 "name": "Malloc2", 00:06:41.710 "aliases": [ 00:06:41.710 "665df5a2-54c2-4400-9fec-eda8357cd84f" 00:06:41.710 ], 00:06:41.710 "product_name": "Malloc disk", 00:06:41.710 "block_size": 512, 00:06:41.710 "num_blocks": 16384, 00:06:41.710 "uuid": "665df5a2-54c2-4400-9fec-eda8357cd84f", 00:06:41.710 "assigned_rate_limits": { 00:06:41.710 "rw_ios_per_sec": 0, 00:06:41.710 "rw_mbytes_per_sec": 0, 00:06:41.710 "r_mbytes_per_sec": 0, 00:06:41.710 "w_mbytes_per_sec": 0 00:06:41.710 }, 00:06:41.710 "claimed": false, 00:06:41.710 "zoned": false, 00:06:41.710 "supported_io_types": { 00:06:41.710 "read": true, 00:06:41.710 "write": true, 00:06:41.710 "unmap": true, 00:06:41.710 "flush": true, 00:06:41.710 "reset": true, 00:06:41.710 "nvme_admin": false, 
00:06:41.710 "nvme_io": false, 00:06:41.710 "nvme_io_md": false, 00:06:41.710 "write_zeroes": true, 00:06:41.710 "zcopy": true, 00:06:41.710 "get_zone_info": false, 00:06:41.710 "zone_management": false, 00:06:41.710 "zone_append": false, 00:06:41.710 "compare": false, 00:06:41.710 "compare_and_write": false, 00:06:41.710 "abort": true, 00:06:41.710 "seek_hole": false, 00:06:41.710 "seek_data": false, 00:06:41.710 "copy": true, 00:06:41.710 "nvme_iov_md": false 00:06:41.710 }, 00:06:41.710 "memory_domains": [ 00:06:41.710 { 00:06:41.710 "dma_device_id": "system", 00:06:41.710 "dma_device_type": 1 00:06:41.710 }, 00:06:41.710 { 00:06:41.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.710 "dma_device_type": 2 00:06:41.710 } 00:06:41.710 ], 00:06:41.710 "driver_specific": {} 00:06:41.710 } 00:06:41.710 ]' 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 [2024-07-21 18:21:59.805659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:41.710 [2024-07-21 18:21:59.805704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.710 [2024-07-21 18:21:59.805726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x63a11b0 00:06:41.710 [2024-07-21 18:21:59.805741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.710 [2024-07-21 18:21:59.806744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.710 [2024-07-21 18:21:59.806773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:41.710 Passthru0 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:41.710 { 00:06:41.710 "name": "Malloc2", 00:06:41.710 "aliases": [ 00:06:41.710 "665df5a2-54c2-4400-9fec-eda8357cd84f" 00:06:41.710 ], 00:06:41.710 "product_name": "Malloc disk", 00:06:41.710 "block_size": 512, 00:06:41.710 "num_blocks": 16384, 00:06:41.710 "uuid": "665df5a2-54c2-4400-9fec-eda8357cd84f", 00:06:41.710 "assigned_rate_limits": { 00:06:41.710 "rw_ios_per_sec": 0, 00:06:41.710 "rw_mbytes_per_sec": 0, 00:06:41.710 "r_mbytes_per_sec": 0, 00:06:41.710 "w_mbytes_per_sec": 0 00:06:41.710 }, 00:06:41.710 "claimed": true, 00:06:41.710 "claim_type": "exclusive_write", 00:06:41.710 "zoned": false, 00:06:41.710 "supported_io_types": { 00:06:41.710 "read": true, 00:06:41.710 "write": true, 00:06:41.710 "unmap": true, 00:06:41.710 "flush": true, 00:06:41.710 "reset": true, 00:06:41.710 "nvme_admin": false, 00:06:41.710 "nvme_io": false, 00:06:41.710 "nvme_io_md": false, 00:06:41.710 "write_zeroes": true, 00:06:41.710 "zcopy": true, 
00:06:41.710 "get_zone_info": false, 00:06:41.710 "zone_management": false, 00:06:41.710 "zone_append": false, 00:06:41.710 "compare": false, 00:06:41.710 "compare_and_write": false, 00:06:41.710 "abort": true, 00:06:41.710 "seek_hole": false, 00:06:41.710 "seek_data": false, 00:06:41.710 "copy": true, 00:06:41.710 "nvme_iov_md": false 00:06:41.710 }, 00:06:41.710 "memory_domains": [ 00:06:41.710 { 00:06:41.710 "dma_device_id": "system", 00:06:41.710 "dma_device_type": 1 00:06:41.710 }, 00:06:41.710 { 00:06:41.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.710 "dma_device_type": 2 00:06:41.710 } 00:06:41.710 ], 00:06:41.710 "driver_specific": {} 00:06:41.710 }, 00:06:41.710 { 00:06:41.710 "name": "Passthru0", 00:06:41.710 "aliases": [ 00:06:41.710 "c2beccb9-8709-5123-89e4-5247006965a9" 00:06:41.710 ], 00:06:41.710 "product_name": "passthru", 00:06:41.710 "block_size": 512, 00:06:41.710 "num_blocks": 16384, 00:06:41.710 "uuid": "c2beccb9-8709-5123-89e4-5247006965a9", 00:06:41.710 "assigned_rate_limits": { 00:06:41.710 "rw_ios_per_sec": 0, 00:06:41.710 "rw_mbytes_per_sec": 0, 00:06:41.710 "r_mbytes_per_sec": 0, 00:06:41.710 "w_mbytes_per_sec": 0 00:06:41.710 }, 00:06:41.710 "claimed": false, 00:06:41.710 "zoned": false, 00:06:41.710 "supported_io_types": { 00:06:41.710 "read": true, 00:06:41.710 "write": true, 00:06:41.710 "unmap": true, 00:06:41.710 "flush": true, 00:06:41.710 "reset": true, 00:06:41.710 "nvme_admin": false, 00:06:41.710 "nvme_io": false, 00:06:41.710 "nvme_io_md": false, 00:06:41.710 "write_zeroes": true, 00:06:41.710 "zcopy": true, 00:06:41.710 "get_zone_info": false, 00:06:41.710 "zone_management": false, 00:06:41.710 "zone_append": false, 00:06:41.710 "compare": false, 00:06:41.710 "compare_and_write": false, 00:06:41.710 "abort": true, 00:06:41.710 "seek_hole": false, 00:06:41.710 "seek_data": false, 00:06:41.710 "copy": true, 00:06:41.710 "nvme_iov_md": false 00:06:41.710 }, 00:06:41.710 "memory_domains": [ 00:06:41.710 { 00:06:41.710 "dma_device_id": "system", 00:06:41.710 "dma_device_type": 1 00:06:41.710 }, 00:06:41.710 { 00:06:41.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.710 "dma_device_type": 2 00:06:41.710 } 00:06:41.710 ], 00:06:41.710 "driver_specific": { 00:06:41.710 "passthru": { 00:06:41.710 "name": "Passthru0", 00:06:41.710 "base_bdev_name": "Malloc2" 00:06:41.710 } 00:06:41.710 } 00:06:41.710 } 00:06:41.710 ]' 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
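The daemon-integrity pass above exercises the full passthru lifecycle: create a malloc base bdev, layer Passthru0 on it (which claims the base, flipping claimed to true with claim_type exclusive_write in bdev_get_bdevs), then tear both down and confirm the bdev list is empty again. The same sequence as standalone RPC calls (every command appears verbatim in the trace; only the $RPC shorthand is mine, and bdev names are auto-assigned, so Malloc2 matches this run's sequence):

  RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 8 512             # 8MB malloc bdev, 512-byte blocks -> 16384 blocks
  $RPC bdev_passthru_create -b Malloc2 -p Passthru0
  $RPC bdev_get_bdevs | jq length           # 2: base + passthru
  $RPC bdev_passthru_delete Passthru0
  $RPC bdev_malloc_delete Malloc2
  $RPC bdev_get_bdevs | jq length           # 0
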
00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:41.710 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:41.969 18:21:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:41.969 00:06:41.969 real 0m0.303s 00:06:41.969 user 0m0.190s 00:06:41.969 sys 0m0.054s 00:06:41.969 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.969 18:21:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.969 ************************************ 00:06:41.969 END TEST rpc_daemon_integrity 00:06:41.969 ************************************ 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:41.969 18:22:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:41.969 18:22:00 rpc -- rpc/rpc.sh@84 -- # killprocess 3799212 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@948 -- # '[' -z 3799212 ']' 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@952 -- # kill -0 3799212 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@953 -- # uname 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799212 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799212' 00:06:41.969 killing process with pid 3799212 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@967 -- # kill 3799212 00:06:41.969 18:22:00 rpc -- common/autotest_common.sh@972 -- # wait 3799212 00:06:42.228 00:06:42.228 real 0m2.841s 00:06:42.228 user 0m3.616s 00:06:42.228 sys 0m0.930s 00:06:42.228 18:22:00 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.228 18:22:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.228 ************************************ 00:06:42.228 END TEST rpc 00:06:42.228 ************************************ 00:06:42.486 18:22:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.486 18:22:00 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:42.486 18:22:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.486 18:22:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.486 18:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:42.486 ************************************ 00:06:42.486 START TEST skip_rpc 00:06:42.486 ************************************ 00:06:42.486 18:22:00 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:42.486 * Looking for test storage... 
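Annotation: killprocess above never signals a pid blindly: kill -0 first confirms the process exists, and ps --no-headers -o comm= confirms the command name still looks like an SPDK reactor (reactor_0), so a recycled pid belonging to some other program is never killed. A simplified sketch of that guard, assuming the target was started as a child of the test shell:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        # refuse to signal a recycled pid that no longer belongs to SPDK
        [ "$(ps --no-headers -o comm= "$pid")" = "reactor_0" ] || return 1
        kill "$pid" && wait "$pid"                      # SIGTERM, then reap the child
    }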
00:06:42.486 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:42.486 18:22:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:42.486 18:22:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:42.486 18:22:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:42.486 18:22:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.486 18:22:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.486 18:22:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.486 ************************************ 00:06:42.486 START TEST skip_rpc 00:06:42.486 ************************************ 00:06:42.486 18:22:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:42.486 18:22:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:42.486 18:22:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3799740 00:06:42.486 18:22:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.486 18:22:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:42.486 [2024-07-21 18:22:00.654453] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:42.486 [2024-07-21 18:22:00.654503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799740 ] 00:06:42.486 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.745 [2024-07-21 18:22:00.756589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.745 [2024-07-21 18:22:00.856774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.009 18:22:05 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3799740 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3799740 ']' 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3799740 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799740 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799740' 00:06:48.009 killing process with pid 3799740 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3799740 00:06:48.009 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3799740 00:06:48.009 00:06:48.009 real 0m5.403s 00:06:48.009 user 0m5.107s 00:06:48.009 sys 0m0.320s 00:06:48.009 18:22:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.009 18:22:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.009 ************************************ 00:06:48.009 END TEST skip_rpc 00:06:48.009 ************************************ 00:06:48.009 18:22:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:48.009 18:22:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:48.009 18:22:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.009 18:22:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.009 18:22:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.009 ************************************ 00:06:48.009 START TEST skip_rpc_with_json 00:06:48.009 ************************************ 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3800476 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3800476 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3800476 ']' 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
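Annotation: the skip_rpc test that just finished (END TEST skip_rpc) starts the target with --no-rpc-server, so there is no socket to wait on and rpc_cmd must fail; the NOT wrapper inverts the exit status so a failing RPC passes the test. A sketch under those assumptions:

    # no RPC server: there is no /var/tmp/spdk.sock to poll, so just sleep
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then          # must NOT succeed
        echo "rpc unexpectedly succeeded" >&2; exit 1
    fi
    kill "$spdk_pid" && wait "$spdk_pid"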
00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.009 18:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:48.009 [2024-07-21 18:22:06.155389] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:48.009 [2024-07-21 18:22:06.155464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3800476 ] 00:06:48.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.268 [2024-07-21 18:22:06.276205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.268 [2024-07-21 18:22:06.378134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.203 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.203 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:49.203 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:49.203 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.203 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.203 [2024-07-21 18:22:07.137381] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:49.203 request: 00:06:49.203 { 00:06:49.203 "trtype": "tcp", 00:06:49.203 "method": "nvmf_get_transports", 00:06:49.203 "req_id": 1 00:06:49.203 } 00:06:49.203 Got JSON-RPC error response 00:06:49.203 response: 00:06:49.203 { 00:06:49.203 "code": -19, 00:06:49.203 "message": "No such device" 00:06:49.203 } 00:06:49.203 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:49.203 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:49.203 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.204 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.204 [2024-07-21 18:22:07.149503] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.204 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.204 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:49.204 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.204 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.204 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.204 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:49.204 { 00:06:49.204 "subsystems": [ 00:06:49.204 { 00:06:49.204 "subsystem": "scheduler", 00:06:49.204 "config": [ 00:06:49.204 { 00:06:49.204 "method": "framework_set_scheduler", 00:06:49.204 "params": { 00:06:49.204 "name": "static" 00:06:49.204 } 00:06:49.204 } 00:06:49.204 ] 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "vmd", 00:06:49.204 "config": [] 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "sock", 00:06:49.204 "config": [ 00:06:49.204 { 00:06:49.204 "method": "sock_set_default_impl", 00:06:49.204 
"params": { 00:06:49.204 "impl_name": "posix" 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "sock_impl_set_options", 00:06:49.204 "params": { 00:06:49.204 "impl_name": "ssl", 00:06:49.204 "recv_buf_size": 4096, 00:06:49.204 "send_buf_size": 4096, 00:06:49.204 "enable_recv_pipe": true, 00:06:49.204 "enable_quickack": false, 00:06:49.204 "enable_placement_id": 0, 00:06:49.204 "enable_zerocopy_send_server": true, 00:06:49.204 "enable_zerocopy_send_client": false, 00:06:49.204 "zerocopy_threshold": 0, 00:06:49.204 "tls_version": 0, 00:06:49.204 "enable_ktls": false 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "sock_impl_set_options", 00:06:49.204 "params": { 00:06:49.204 "impl_name": "posix", 00:06:49.204 "recv_buf_size": 2097152, 00:06:49.204 "send_buf_size": 2097152, 00:06:49.204 "enable_recv_pipe": true, 00:06:49.204 "enable_quickack": false, 00:06:49.204 "enable_placement_id": 0, 00:06:49.204 "enable_zerocopy_send_server": true, 00:06:49.204 "enable_zerocopy_send_client": false, 00:06:49.204 "zerocopy_threshold": 0, 00:06:49.204 "tls_version": 0, 00:06:49.204 "enable_ktls": false 00:06:49.204 } 00:06:49.204 } 00:06:49.204 ] 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "iobuf", 00:06:49.204 "config": [ 00:06:49.204 { 00:06:49.204 "method": "iobuf_set_options", 00:06:49.204 "params": { 00:06:49.204 "small_pool_count": 8192, 00:06:49.204 "large_pool_count": 1024, 00:06:49.204 "small_bufsize": 8192, 00:06:49.204 "large_bufsize": 135168 00:06:49.204 } 00:06:49.204 } 00:06:49.204 ] 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "keyring", 00:06:49.204 "config": [] 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "vfio_user_target", 00:06:49.204 "config": null 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "accel", 00:06:49.204 "config": [ 00:06:49.204 { 00:06:49.204 "method": "accel_set_options", 00:06:49.204 "params": { 00:06:49.204 "small_cache_size": 128, 00:06:49.204 "large_cache_size": 16, 00:06:49.204 "task_count": 2048, 00:06:49.204 "sequence_count": 2048, 00:06:49.204 "buf_count": 2048 00:06:49.204 } 00:06:49.204 } 00:06:49.204 ] 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "bdev", 00:06:49.204 "config": [ 00:06:49.204 { 00:06:49.204 "method": "bdev_set_options", 00:06:49.204 "params": { 00:06:49.204 "bdev_io_pool_size": 65535, 00:06:49.204 "bdev_io_cache_size": 256, 00:06:49.204 "bdev_auto_examine": true, 00:06:49.204 "iobuf_small_cache_size": 128, 00:06:49.204 "iobuf_large_cache_size": 16 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "bdev_raid_set_options", 00:06:49.204 "params": { 00:06:49.204 "process_window_size_kb": 1024, 00:06:49.204 "process_max_bandwidth_mb_sec": 0 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "bdev_nvme_set_options", 00:06:49.204 "params": { 00:06:49.204 "action_on_timeout": "none", 00:06:49.204 "timeout_us": 0, 00:06:49.204 "timeout_admin_us": 0, 00:06:49.204 "keep_alive_timeout_ms": 10000, 00:06:49.204 "arbitration_burst": 0, 00:06:49.204 "low_priority_weight": 0, 00:06:49.204 "medium_priority_weight": 0, 00:06:49.204 "high_priority_weight": 0, 00:06:49.204 "nvme_adminq_poll_period_us": 10000, 00:06:49.204 "nvme_ioq_poll_period_us": 0, 00:06:49.204 "io_queue_requests": 0, 00:06:49.204 "delay_cmd_submit": true, 00:06:49.204 "transport_retry_count": 4, 00:06:49.204 "bdev_retry_count": 3, 00:06:49.204 "transport_ack_timeout": 0, 00:06:49.204 "ctrlr_loss_timeout_sec": 0, 00:06:49.204 "reconnect_delay_sec": 0, 
00:06:49.204 "fast_io_fail_timeout_sec": 0, 00:06:49.204 "disable_auto_failback": false, 00:06:49.204 "generate_uuids": false, 00:06:49.204 "transport_tos": 0, 00:06:49.204 "nvme_error_stat": false, 00:06:49.204 "rdma_srq_size": 0, 00:06:49.204 "io_path_stat": false, 00:06:49.204 "allow_accel_sequence": false, 00:06:49.204 "rdma_max_cq_size": 0, 00:06:49.204 "rdma_cm_event_timeout_ms": 0, 00:06:49.204 "dhchap_digests": [ 00:06:49.204 "sha256", 00:06:49.204 "sha384", 00:06:49.204 "sha512" 00:06:49.204 ], 00:06:49.204 "dhchap_dhgroups": [ 00:06:49.204 "null", 00:06:49.204 "ffdhe2048", 00:06:49.204 "ffdhe3072", 00:06:49.204 "ffdhe4096", 00:06:49.204 "ffdhe6144", 00:06:49.204 "ffdhe8192" 00:06:49.204 ] 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "bdev_nvme_set_hotplug", 00:06:49.204 "params": { 00:06:49.204 "period_us": 100000, 00:06:49.204 "enable": false 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "bdev_iscsi_set_options", 00:06:49.204 "params": { 00:06:49.204 "timeout_sec": 30 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "bdev_wait_for_examine" 00:06:49.204 } 00:06:49.204 ] 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "nvmf", 00:06:49.204 "config": [ 00:06:49.204 { 00:06:49.204 "method": "nvmf_set_config", 00:06:49.204 "params": { 00:06:49.204 "discovery_filter": "match_any", 00:06:49.204 "admin_cmd_passthru": { 00:06:49.204 "identify_ctrlr": false 00:06:49.204 } 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "nvmf_set_max_subsystems", 00:06:49.204 "params": { 00:06:49.204 "max_subsystems": 1024 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "nvmf_set_crdt", 00:06:49.204 "params": { 00:06:49.204 "crdt1": 0, 00:06:49.204 "crdt2": 0, 00:06:49.204 "crdt3": 0 00:06:49.204 } 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "method": "nvmf_create_transport", 00:06:49.204 "params": { 00:06:49.204 "trtype": "TCP", 00:06:49.204 "max_queue_depth": 128, 00:06:49.204 "max_io_qpairs_per_ctrlr": 127, 00:06:49.204 "in_capsule_data_size": 4096, 00:06:49.204 "max_io_size": 131072, 00:06:49.204 "io_unit_size": 131072, 00:06:49.204 "max_aq_depth": 128, 00:06:49.204 "num_shared_buffers": 511, 00:06:49.204 "buf_cache_size": 4294967295, 00:06:49.204 "dif_insert_or_strip": false, 00:06:49.204 "zcopy": false, 00:06:49.204 "c2h_success": true, 00:06:49.204 "sock_priority": 0, 00:06:49.204 "abort_timeout_sec": 1, 00:06:49.204 "ack_timeout": 0, 00:06:49.204 "data_wr_pool_size": 0 00:06:49.204 } 00:06:49.204 } 00:06:49.204 ] 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "nbd", 00:06:49.204 "config": [] 00:06:49.204 }, 00:06:49.204 { 00:06:49.204 "subsystem": "ublk", 00:06:49.204 "config": [] 00:06:49.204 }, 00:06:49.205 { 00:06:49.205 "subsystem": "vhost_blk", 00:06:49.205 "config": [] 00:06:49.205 }, 00:06:49.205 { 00:06:49.205 "subsystem": "scsi", 00:06:49.205 "config": null 00:06:49.205 }, 00:06:49.205 { 00:06:49.205 "subsystem": "iscsi", 00:06:49.205 "config": [ 00:06:49.205 { 00:06:49.205 "method": "iscsi_set_options", 00:06:49.205 "params": { 00:06:49.205 "node_base": "iqn.2016-06.io.spdk", 00:06:49.205 "max_sessions": 128, 00:06:49.205 "max_connections_per_session": 2, 00:06:49.205 "max_queue_depth": 64, 00:06:49.205 "default_time2wait": 2, 00:06:49.205 "default_time2retain": 20, 00:06:49.205 "first_burst_length": 8192, 00:06:49.205 "immediate_data": true, 00:06:49.205 "allow_duplicated_isid": false, 00:06:49.205 "error_recovery_level": 0, 00:06:49.205 "nop_timeout": 60, 
00:06:49.205 "nop_in_interval": 30, 00:06:49.205 "disable_chap": false, 00:06:49.205 "require_chap": false, 00:06:49.205 "mutual_chap": false, 00:06:49.205 "chap_group": 0, 00:06:49.205 "max_large_datain_per_connection": 64, 00:06:49.205 "max_r2t_per_connection": 4, 00:06:49.205 "pdu_pool_size": 36864, 00:06:49.205 "immediate_data_pool_size": 16384, 00:06:49.205 "data_out_pool_size": 2048 00:06:49.205 } 00:06:49.205 } 00:06:49.205 ] 00:06:49.205 }, 00:06:49.205 { 00:06:49.205 "subsystem": "vhost_scsi", 00:06:49.205 "config": [] 00:06:49.205 } 00:06:49.205 ] 00:06:49.205 } 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3800476 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3800476 ']' 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3800476 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3800476 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3800476' 00:06:49.205 killing process with pid 3800476 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3800476 00:06:49.205 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3800476 00:06:49.771 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3800663 00:06:49.771 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:49.771 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3800663 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3800663 ']' 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3800663 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3800663 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3800663' 00:06:55.035 killing process with pid 3800663 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3800663 00:06:55.035 18:22:12 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # wait 3800663 00:06:55.035 18:22:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:55.035 18:22:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:55.035 00:06:55.035 real 0m7.056s 00:06:55.035 user 0m6.803s 00:06:55.035 sys 0m0.844s 00:06:55.035 18:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.035 18:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.035 ************************************ 00:06:55.035 END TEST skip_rpc_with_json 00:06:55.035 ************************************ 00:06:55.035 18:22:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:55.035 18:22:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:55.035 18:22:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.035 18:22:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.035 18:22:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.294 ************************************ 00:06:55.294 START TEST skip_rpc_with_delay 00:06:55.294 ************************************ 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:55.294 [2024-07-21 18:22:13.295981] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
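Annotation: skip_rpc_with_json, which ended just above, proves a saved configuration round-trips: it creates the TCP transport over RPC, dumps the state with save_config, relaunches the target from that JSON, and greps the new log for 'TCP Transport Init'. A condensed sketch (file names illustrative):

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/config.json
    # replay the config into a fresh target that runs no RPC server at all
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json \
        > /tmp/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' /tmp/log.txt           # transport rebuilt from JSON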
00:06:55.294 [2024-07-21 18:22:13.296127] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.294 00:06:55.294 real 0m0.047s 00:06:55.294 user 0m0.027s 00:06:55.294 sys 0m0.019s 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.294 18:22:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:55.294 ************************************ 00:06:55.294 END TEST skip_rpc_with_delay 00:06:55.294 ************************************ 00:06:55.294 18:22:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:55.294 18:22:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:55.294 18:22:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:55.294 18:22:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:55.294 18:22:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.294 18:22:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.294 18:22:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.294 ************************************ 00:06:55.294 START TEST exit_on_failed_rpc_init 00:06:55.294 ************************************ 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3801482 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3801482 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3801482 ']' 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.294 18:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:55.294 [2024-07-21 18:22:13.422149] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
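Annotation: skip_rpc_with_delay (END TEST above) only checks argument validation: --wait-for-rpc makes no sense together with --no-rpc-server, so spdk_tgt must refuse to start, printing the app.c error logged above and exiting non-zero. Sketch:

    # invalid combination: wait for RPCs on a server that will never be started
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt should have refused to start" >&2; exit 1
    fi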
00:06:55.294 [2024-07-21 18:22:13.422207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3801482 ] 00:06:55.294 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.553 [2024-07-21 18:22:13.542624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.553 [2024-07-21 18:22:13.649104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:56.489 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:56.489 [2024-07-21 18:22:14.420490] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
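Annotation: exit_on_failed_rpc_init now launches a second target (-m 0x2) while the first (-m 0x1) still owns the default /var/tmp/spdk.sock; the lines that follow show rpc.c rejecting the socket as in use and the second instance exiting through spdk_app_stop with a non-zero status, which the NOT wrapper again converts into a pass. A sketch of the collision:

    ./build/bin/spdk_tgt -m 0x1 &                       # first instance owns /var/tmp/spdk.sock
    first_pid=$!
    # ...after waiting for the socket to appear...
    if ./build/bin/spdk_tgt -m 0x2; then                # same socket path -> RPC init fails
        echo "second instance should have failed" >&2; exit 1
    fi
    kill "$first_pid" && wait "$first_pid"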
00:06:56.489 [2024-07-21 18:22:14.420566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3801597 ] 00:06:56.489 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.489 [2024-07-21 18:22:14.530450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.489 [2024-07-21 18:22:14.626119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.489 [2024-07-21 18:22:14.626230] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:56.489 [2024-07-21 18:22:14.626249] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:56.489 [2024-07-21 18:22:14.626260] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3801482 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3801482 ']' 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3801482 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3801482 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3801482' 00:06:56.748 killing process with pid 3801482 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3801482 00:06:56.748 18:22:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3801482 00:06:57.006 00:06:57.006 real 0m1.730s 00:06:57.006 user 0m1.981s 00:06:57.006 sys 0m0.577s 00:06:57.006 18:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.006 18:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:57.006 ************************************ 00:06:57.006 END TEST exit_on_failed_rpc_init 00:06:57.006 ************************************ 00:06:57.006 18:22:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:57.006 18:22:15 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:57.006 00:06:57.006 real 0m14.677s 00:06:57.006 user 0m14.078s 00:06:57.006 sys 0m2.076s 00:06:57.007 18:22:15 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.007 18:22:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.007 ************************************ 00:06:57.007 END TEST skip_rpc 00:06:57.007 ************************************ 00:06:57.007 18:22:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.007 18:22:15 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:57.007 18:22:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.007 18:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.007 18:22:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.265 ************************************ 00:06:57.265 START TEST rpc_client 00:06:57.265 ************************************ 00:06:57.265 18:22:15 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:57.265 * Looking for test storage... 00:06:57.265 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:57.265 18:22:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:57.265 OK 00:06:57.265 18:22:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:57.265 00:06:57.265 real 0m0.125s 00:06:57.265 user 0m0.051s 00:06:57.265 sys 0m0.085s 00:06:57.265 18:22:15 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.265 18:22:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:57.265 ************************************ 00:06:57.265 END TEST rpc_client 00:06:57.265 ************************************ 00:06:57.265 18:22:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.265 18:22:15 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:57.265 18:22:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.265 18:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.265 18:22:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.266 ************************************ 00:06:57.266 START TEST json_config 00:06:57.266 ************************************ 00:06:57.266 18:22:15 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:57.525 18:22:15 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:57.525 18:22:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:57.525 18:22:15 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.525 18:22:15 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.525 18:22:15 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.525 18:22:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.525 18:22:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.525 18:22:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.525 18:22:15 json_config -- paths/export.sh@5 -- # export PATH 00:06:57.525 18:22:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@47 -- # : 0 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
00:06:57.525 18:22:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.525 18:22:15 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.525 18:22:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:57.525 18:22:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:57.525 18:22:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:57.525 18:22:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:57.525 18:22:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:57.525 18:22:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:57.525 WARNING: No tests are enabled so not running JSON configuration tests 00:06:57.525 18:22:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:57.525 00:06:57.525 real 0m0.113s 00:06:57.525 user 0m0.055s 00:06:57.525 sys 0m0.059s 00:06:57.525 18:22:15 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.525 18:22:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.525 ************************************ 00:06:57.525 END TEST json_config 00:06:57.525 ************************************ 00:06:57.525 18:22:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.525 18:22:15 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:57.525 18:22:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.525 18:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.525 18:22:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.525 ************************************ 00:06:57.525 START TEST json_config_extra_key 00:06:57.525 ************************************ 00:06:57.525 18:22:15 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:57.525 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.525 18:22:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.784 18:22:15 json_config_extra_key -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:57.784 18:22:15 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.784 18:22:15 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.784 18:22:15 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.784 18:22:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.784 18:22:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.784 18:22:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.784 18:22:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:57.784 18:22:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.784 
18:22:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.784 18:22:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.785 18:22:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.785 18:22:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.785 18:22:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.785 18:22:15 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:57.785 INFO: launching applications... 00:06:57.785 18:22:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3801917 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:57.785 Waiting for target to run... 
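Annotation: json_config_extra_key boots the target directly from a JSON file on a private RPC socket (-r /var/tmp/spdk_tgt.sock, with 1024 MB of memory via -s 1024), then waits for it to answer. A sketch of the launch-and-wait step; the polling loop is an assumption about what waitforlisten does, not a copy of it:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    app_pid=$!
    # poll a trivial RPC until the target starts answering on the socket
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done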
00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3801917 /var/tmp/spdk_tgt.sock 00:06:57.785 18:22:15 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3801917 ']' 00:06:57.785 18:22:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:57.785 18:22:15 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:57.785 18:22:15 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.785 18:22:15 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:57.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:57.785 18:22:15 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.785 18:22:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:57.785 [2024-07-21 18:22:15.791234] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:57.785 [2024-07-21 18:22:15.791313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3801917 ] 00:06:57.785 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.042 [2024-07-21 18:22:16.149265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.042 [2024-07-21 18:22:16.237109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.608 18:22:16 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.608 18:22:16 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:58.608 18:22:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:58.608 00:06:58.608 18:22:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:58.608 INFO: shutting down applications... 
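Annotation: the shutdown that follows is graceful first: SIGINT, then up to 30 polls of kill -0 half a second apart before the test would give up. A sketch of that loop, with an escalation step added here only as an illustration:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break          # target exited cleanly
        sleep 0.5
    done
    kill -0 "$app_pid" 2>/dev/null && kill -9 "$app_pid"  # illustrative last resort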
00:06:58.608 18:22:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:58.608 18:22:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:58.608 18:22:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:58.608 18:22:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3801917 ]] 00:06:58.608 18:22:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3801917 00:06:58.608 18:22:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:58.608 18:22:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.608 18:22:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3801917 00:06:58.608 18:22:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:59.176 18:22:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:59.176 18:22:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:59.176 18:22:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3801917 00:06:59.176 18:22:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:59.176 18:22:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:59.176 18:22:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:59.176 18:22:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:59.176 SPDK target shutdown done 00:06:59.176 18:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:59.176 Success 00:06:59.176 00:06:59.176 real 0m1.597s 00:06:59.176 user 0m1.451s 00:06:59.176 sys 0m0.485s 00:06:59.176 18:22:17 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.176 18:22:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:59.176 ************************************ 00:06:59.176 END TEST json_config_extra_key 00:06:59.176 ************************************ 00:06:59.176 18:22:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:59.176 18:22:17 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:59.176 18:22:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.176 18:22:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.176 18:22:17 -- common/autotest_common.sh@10 -- # set +x 00:06:59.176 ************************************ 00:06:59.176 START TEST alias_rpc 00:06:59.176 ************************************ 00:06:59.176 18:22:17 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:59.434 * Looking for test storage... 
00:06:59.434 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:59.434 18:22:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:59.434 18:22:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3802216 00:06:59.434 18:22:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3802216 00:06:59.434 18:22:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.434 18:22:17 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3802216 ']' 00:06:59.434 18:22:17 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.434 18:22:17 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.434 18:22:17 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.434 18:22:17 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.434 18:22:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.434 [2024-07-21 18:22:17.490063] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:59.434 [2024-07-21 18:22:17.490160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802216 ] 00:06:59.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.434 [2024-07-21 18:22:17.612116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.693 [2024-07-21 18:22:17.711619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.261 18:22:18 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.261 18:22:18 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:00.261 18:22:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:00.828 18:22:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3802216 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3802216 ']' 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3802216 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3802216 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3802216' 00:07:00.828 killing process with pid 3802216 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@967 -- # kill 3802216 00:07:00.828 18:22:18 alias_rpc -- common/autotest_common.sh@972 -- # wait 3802216 00:07:01.087 00:07:01.087 real 0m1.813s 00:07:01.087 user 0m2.025s 00:07:01.087 sys 0m0.565s 00:07:01.087 18:22:19 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.087 18:22:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:01.087 ************************************ 00:07:01.087 END TEST alias_rpc 00:07:01.087 ************************************ 00:07:01.087 18:22:19 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.087 18:22:19 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:01.087 18:22:19 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:01.087 18:22:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.087 18:22:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.087 18:22:19 -- common/autotest_common.sh@10 -- # set +x 00:07:01.087 ************************************ 00:07:01.087 START TEST spdkcli_tcp 00:07:01.087 ************************************ 00:07:01.087 18:22:19 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:01.444 * Looking for test storage... 00:07:01.444 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:01.444 18:22:19 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.444 18:22:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3802545 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:01.444 18:22:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3802545 00:07:01.444 18:22:19 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3802545 ']' 00:07:01.444 18:22:19 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.444 18:22:19 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.444 18:22:19 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.444 18:22:19 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.444 18:22:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.444 [2024-07-21 18:22:19.387597] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
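The spdkcli_tcp run starting here drives the target over TCP rather than through the UNIX socket directly: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py connects through it. A minimal reproduction of that bridge, using the exact flags from the trace below:

  # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Query the RPC method list over TCP (-r retries, -t timeout, as traced below).
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid"

Note that socat without the fork option serves a single connection and exits when it closes, which is enough for one rpc.py invocation.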
00:07:01.444 [2024-07-21 18:22:19.387695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802545 ] 00:07:01.444 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.444 [2024-07-21 18:22:19.509405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.444 [2024-07-21 18:22:19.612876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.445 [2024-07-21 18:22:19.612882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.390 18:22:20 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.390 18:22:20 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:07:02.390 18:22:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3802725 00:07:02.390 18:22:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:02.390 18:22:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:02.390 [ 00:07:02.390 "spdk_get_version", 00:07:02.390 "rpc_get_methods", 00:07:02.390 "trace_get_info", 00:07:02.390 "trace_get_tpoint_group_mask", 00:07:02.390 "trace_disable_tpoint_group", 00:07:02.390 "trace_enable_tpoint_group", 00:07:02.390 "trace_clear_tpoint_mask", 00:07:02.390 "trace_set_tpoint_mask", 00:07:02.391 "vfu_tgt_set_base_path", 00:07:02.391 "framework_get_pci_devices", 00:07:02.391 "framework_get_config", 00:07:02.391 "framework_get_subsystems", 00:07:02.391 "keyring_get_keys", 00:07:02.391 "iobuf_get_stats", 00:07:02.391 "iobuf_set_options", 00:07:02.391 "sock_get_default_impl", 00:07:02.391 "sock_set_default_impl", 00:07:02.391 "sock_impl_set_options", 00:07:02.391 "sock_impl_get_options", 00:07:02.391 "vmd_rescan", 00:07:02.391 "vmd_remove_device", 00:07:02.391 "vmd_enable", 00:07:02.391 "accel_get_stats", 00:07:02.391 "accel_set_options", 00:07:02.391 "accel_set_driver", 00:07:02.391 "accel_crypto_key_destroy", 00:07:02.391 "accel_crypto_keys_get", 00:07:02.391 "accel_crypto_key_create", 00:07:02.391 "accel_assign_opc", 00:07:02.391 "accel_get_module_info", 00:07:02.391 "accel_get_opc_assignments", 00:07:02.391 "notify_get_notifications", 00:07:02.391 "notify_get_types", 00:07:02.391 "bdev_get_histogram", 00:07:02.391 "bdev_enable_histogram", 00:07:02.391 "bdev_set_qos_limit", 00:07:02.391 "bdev_set_qd_sampling_period", 00:07:02.391 "bdev_get_bdevs", 00:07:02.391 "bdev_reset_iostat", 00:07:02.391 "bdev_get_iostat", 00:07:02.391 "bdev_examine", 00:07:02.391 "bdev_wait_for_examine", 00:07:02.391 "bdev_set_options", 00:07:02.391 "scsi_get_devices", 00:07:02.391 "thread_set_cpumask", 00:07:02.391 "framework_get_governor", 00:07:02.391 "framework_get_scheduler", 00:07:02.391 "framework_set_scheduler", 00:07:02.391 "framework_get_reactors", 00:07:02.391 "thread_get_io_channels", 00:07:02.391 "thread_get_pollers", 00:07:02.391 "thread_get_stats", 00:07:02.391 "framework_monitor_context_switch", 00:07:02.391 "spdk_kill_instance", 00:07:02.391 "log_enable_timestamps", 00:07:02.391 "log_get_flags", 00:07:02.391 "log_clear_flag", 00:07:02.391 "log_set_flag", 00:07:02.391 "log_get_level", 00:07:02.391 "log_set_level", 00:07:02.391 "log_get_print_level", 00:07:02.391 "log_set_print_level", 00:07:02.391 "framework_enable_cpumask_locks", 00:07:02.391 "framework_disable_cpumask_locks", 
00:07:02.391 "framework_wait_init", 00:07:02.391 "framework_start_init", 00:07:02.391 "virtio_blk_create_transport", 00:07:02.391 "virtio_blk_get_transports", 00:07:02.391 "vhost_controller_set_coalescing", 00:07:02.391 "vhost_get_controllers", 00:07:02.391 "vhost_delete_controller", 00:07:02.391 "vhost_create_blk_controller", 00:07:02.391 "vhost_scsi_controller_remove_target", 00:07:02.391 "vhost_scsi_controller_add_target", 00:07:02.391 "vhost_start_scsi_controller", 00:07:02.391 "vhost_create_scsi_controller", 00:07:02.391 "ublk_recover_disk", 00:07:02.391 "ublk_get_disks", 00:07:02.391 "ublk_stop_disk", 00:07:02.391 "ublk_start_disk", 00:07:02.391 "ublk_destroy_target", 00:07:02.391 "ublk_create_target", 00:07:02.391 "nbd_get_disks", 00:07:02.391 "nbd_stop_disk", 00:07:02.391 "nbd_start_disk", 00:07:02.391 "env_dpdk_get_mem_stats", 00:07:02.391 "nvmf_stop_mdns_prr", 00:07:02.391 "nvmf_publish_mdns_prr", 00:07:02.391 "nvmf_subsystem_get_listeners", 00:07:02.391 "nvmf_subsystem_get_qpairs", 00:07:02.391 "nvmf_subsystem_get_controllers", 00:07:02.391 "nvmf_get_stats", 00:07:02.391 "nvmf_get_transports", 00:07:02.391 "nvmf_create_transport", 00:07:02.391 "nvmf_get_targets", 00:07:02.391 "nvmf_delete_target", 00:07:02.391 "nvmf_create_target", 00:07:02.391 "nvmf_subsystem_allow_any_host", 00:07:02.391 "nvmf_subsystem_remove_host", 00:07:02.391 "nvmf_subsystem_add_host", 00:07:02.391 "nvmf_ns_remove_host", 00:07:02.391 "nvmf_ns_add_host", 00:07:02.391 "nvmf_subsystem_remove_ns", 00:07:02.391 "nvmf_subsystem_add_ns", 00:07:02.391 "nvmf_subsystem_listener_set_ana_state", 00:07:02.391 "nvmf_discovery_get_referrals", 00:07:02.391 "nvmf_discovery_remove_referral", 00:07:02.391 "nvmf_discovery_add_referral", 00:07:02.391 "nvmf_subsystem_remove_listener", 00:07:02.391 "nvmf_subsystem_add_listener", 00:07:02.391 "nvmf_delete_subsystem", 00:07:02.391 "nvmf_create_subsystem", 00:07:02.391 "nvmf_get_subsystems", 00:07:02.391 "nvmf_set_crdt", 00:07:02.391 "nvmf_set_config", 00:07:02.391 "nvmf_set_max_subsystems", 00:07:02.391 "iscsi_get_histogram", 00:07:02.391 "iscsi_enable_histogram", 00:07:02.391 "iscsi_set_options", 00:07:02.391 "iscsi_get_auth_groups", 00:07:02.391 "iscsi_auth_group_remove_secret", 00:07:02.391 "iscsi_auth_group_add_secret", 00:07:02.391 "iscsi_delete_auth_group", 00:07:02.391 "iscsi_create_auth_group", 00:07:02.391 "iscsi_set_discovery_auth", 00:07:02.391 "iscsi_get_options", 00:07:02.391 "iscsi_target_node_request_logout", 00:07:02.391 "iscsi_target_node_set_redirect", 00:07:02.391 "iscsi_target_node_set_auth", 00:07:02.391 "iscsi_target_node_add_lun", 00:07:02.391 "iscsi_get_stats", 00:07:02.391 "iscsi_get_connections", 00:07:02.391 "iscsi_portal_group_set_auth", 00:07:02.391 "iscsi_start_portal_group", 00:07:02.391 "iscsi_delete_portal_group", 00:07:02.391 "iscsi_create_portal_group", 00:07:02.391 "iscsi_get_portal_groups", 00:07:02.391 "iscsi_delete_target_node", 00:07:02.391 "iscsi_target_node_remove_pg_ig_maps", 00:07:02.391 "iscsi_target_node_add_pg_ig_maps", 00:07:02.391 "iscsi_create_target_node", 00:07:02.391 "iscsi_get_target_nodes", 00:07:02.391 "iscsi_delete_initiator_group", 00:07:02.391 "iscsi_initiator_group_remove_initiators", 00:07:02.391 "iscsi_initiator_group_add_initiators", 00:07:02.391 "iscsi_create_initiator_group", 00:07:02.391 "iscsi_get_initiator_groups", 00:07:02.391 "keyring_linux_set_options", 00:07:02.391 "keyring_file_remove_key", 00:07:02.391 "keyring_file_add_key", 00:07:02.391 "vfu_virtio_create_scsi_endpoint", 00:07:02.391 
"vfu_virtio_scsi_remove_target", 00:07:02.391 "vfu_virtio_scsi_add_target", 00:07:02.391 "vfu_virtio_create_blk_endpoint", 00:07:02.391 "vfu_virtio_delete_endpoint", 00:07:02.391 "iaa_scan_accel_module", 00:07:02.391 "dsa_scan_accel_module", 00:07:02.391 "ioat_scan_accel_module", 00:07:02.391 "accel_error_inject_error", 00:07:02.391 "bdev_iscsi_delete", 00:07:02.391 "bdev_iscsi_create", 00:07:02.391 "bdev_iscsi_set_options", 00:07:02.391 "bdev_virtio_attach_controller", 00:07:02.391 "bdev_virtio_scsi_get_devices", 00:07:02.391 "bdev_virtio_detach_controller", 00:07:02.391 "bdev_virtio_blk_set_hotplug", 00:07:02.391 "bdev_ftl_set_property", 00:07:02.391 "bdev_ftl_get_properties", 00:07:02.391 "bdev_ftl_get_stats", 00:07:02.391 "bdev_ftl_unmap", 00:07:02.391 "bdev_ftl_unload", 00:07:02.391 "bdev_ftl_delete", 00:07:02.391 "bdev_ftl_load", 00:07:02.391 "bdev_ftl_create", 00:07:02.391 "bdev_aio_delete", 00:07:02.391 "bdev_aio_rescan", 00:07:02.391 "bdev_aio_create", 00:07:02.391 "blobfs_create", 00:07:02.391 "blobfs_detect", 00:07:02.391 "blobfs_set_cache_size", 00:07:02.391 "bdev_zone_block_delete", 00:07:02.391 "bdev_zone_block_create", 00:07:02.391 "bdev_delay_delete", 00:07:02.391 "bdev_delay_create", 00:07:02.391 "bdev_delay_update_latency", 00:07:02.391 "bdev_split_delete", 00:07:02.391 "bdev_split_create", 00:07:02.391 "bdev_error_inject_error", 00:07:02.391 "bdev_error_delete", 00:07:02.391 "bdev_error_create", 00:07:02.391 "bdev_raid_set_options", 00:07:02.391 "bdev_raid_remove_base_bdev", 00:07:02.391 "bdev_raid_add_base_bdev", 00:07:02.391 "bdev_raid_delete", 00:07:02.391 "bdev_raid_create", 00:07:02.391 "bdev_raid_get_bdevs", 00:07:02.391 "bdev_lvol_set_parent_bdev", 00:07:02.391 "bdev_lvol_set_parent", 00:07:02.391 "bdev_lvol_check_shallow_copy", 00:07:02.391 "bdev_lvol_start_shallow_copy", 00:07:02.391 "bdev_lvol_grow_lvstore", 00:07:02.391 "bdev_lvol_get_lvols", 00:07:02.391 "bdev_lvol_get_lvstores", 00:07:02.391 "bdev_lvol_delete", 00:07:02.391 "bdev_lvol_set_read_only", 00:07:02.391 "bdev_lvol_resize", 00:07:02.391 "bdev_lvol_decouple_parent", 00:07:02.391 "bdev_lvol_inflate", 00:07:02.391 "bdev_lvol_rename", 00:07:02.391 "bdev_lvol_clone_bdev", 00:07:02.391 "bdev_lvol_clone", 00:07:02.391 "bdev_lvol_snapshot", 00:07:02.391 "bdev_lvol_create", 00:07:02.391 "bdev_lvol_delete_lvstore", 00:07:02.391 "bdev_lvol_rename_lvstore", 00:07:02.391 "bdev_lvol_create_lvstore", 00:07:02.391 "bdev_passthru_delete", 00:07:02.391 "bdev_passthru_create", 00:07:02.391 "bdev_nvme_cuse_unregister", 00:07:02.391 "bdev_nvme_cuse_register", 00:07:02.391 "bdev_opal_new_user", 00:07:02.391 "bdev_opal_set_lock_state", 00:07:02.391 "bdev_opal_delete", 00:07:02.391 "bdev_opal_get_info", 00:07:02.391 "bdev_opal_create", 00:07:02.391 "bdev_nvme_opal_revert", 00:07:02.391 "bdev_nvme_opal_init", 00:07:02.391 "bdev_nvme_send_cmd", 00:07:02.391 "bdev_nvme_get_path_iostat", 00:07:02.391 "bdev_nvme_get_mdns_discovery_info", 00:07:02.391 "bdev_nvme_stop_mdns_discovery", 00:07:02.391 "bdev_nvme_start_mdns_discovery", 00:07:02.391 "bdev_nvme_set_multipath_policy", 00:07:02.391 "bdev_nvme_set_preferred_path", 00:07:02.391 "bdev_nvme_get_io_paths", 00:07:02.391 "bdev_nvme_remove_error_injection", 00:07:02.392 "bdev_nvme_add_error_injection", 00:07:02.392 "bdev_nvme_get_discovery_info", 00:07:02.392 "bdev_nvme_stop_discovery", 00:07:02.392 "bdev_nvme_start_discovery", 00:07:02.392 "bdev_nvme_get_controller_health_info", 00:07:02.392 "bdev_nvme_disable_controller", 00:07:02.392 "bdev_nvme_enable_controller", 00:07:02.392 
"bdev_nvme_reset_controller", 00:07:02.392 "bdev_nvme_get_transport_statistics", 00:07:02.392 "bdev_nvme_apply_firmware", 00:07:02.392 "bdev_nvme_detach_controller", 00:07:02.392 "bdev_nvme_get_controllers", 00:07:02.392 "bdev_nvme_attach_controller", 00:07:02.392 "bdev_nvme_set_hotplug", 00:07:02.392 "bdev_nvme_set_options", 00:07:02.392 "bdev_null_resize", 00:07:02.392 "bdev_null_delete", 00:07:02.392 "bdev_null_create", 00:07:02.392 "bdev_malloc_delete", 00:07:02.392 "bdev_malloc_create" 00:07:02.392 ] 00:07:02.650 18:22:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.650 18:22:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:02.650 18:22:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3802545 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3802545 ']' 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3802545 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3802545 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3802545' 00:07:02.650 killing process with pid 3802545 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3802545 00:07:02.650 18:22:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3802545 00:07:02.908 00:07:02.908 real 0m1.800s 00:07:02.908 user 0m3.360s 00:07:02.908 sys 0m0.603s 00:07:02.908 18:22:21 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.908 18:22:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.908 ************************************ 00:07:02.908 END TEST spdkcli_tcp 00:07:02.908 ************************************ 00:07:02.908 18:22:21 -- common/autotest_common.sh@1142 -- # return 0 00:07:02.908 18:22:21 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:02.908 18:22:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.908 18:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.908 18:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:03.166 ************************************ 00:07:03.166 START TEST dpdk_mem_utility 00:07:03.166 ************************************ 00:07:03.166 18:22:21 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:03.166 * Looking for test storage... 
00:07:03.166 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:07:03.166 18:22:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:03.166 18:22:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3802797 00:07:03.166 18:22:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3802797 00:07:03.166 18:22:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.167 18:22:21 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3802797 ']' 00:07:03.167 18:22:21 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.167 18:22:21 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.167 18:22:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.167 18:22:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.167 18:22:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:03.167 [2024-07-21 18:22:21.255352] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:03.167 [2024-07-21 18:22:21.255426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802797 ] 00:07:03.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.167 [2024-07-21 18:22:21.360383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.425 [2024-07-21 18:22:21.461557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.999 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.999 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:07:03.999 18:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:03.999 18:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:03.999 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.999 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:03.999 { 00:07:03.999 "filename": "/tmp/spdk_mem_dump.txt" 00:07:03.999 } 00:07:03.999 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.999 18:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:04.261 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:04.261 1 heaps totaling size 814.000000 MiB 00:07:04.261 size: 814.000000 MiB heap id: 0 00:07:04.261 end heaps---------- 00:07:04.261 8 mempools totaling size 598.116089 MiB 00:07:04.261 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:04.261 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:04.261 size: 84.521057 MiB name: bdev_io_3802797 00:07:04.261 size: 51.011292 MiB name: evtpool_3802797 
00:07:04.261 size: 50.003479 MiB name: msgpool_3802797 00:07:04.261 size: 21.763794 MiB name: PDU_Pool 00:07:04.261 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:04.261 size: 0.026123 MiB name: Session_Pool 00:07:04.261 end mempools------- 00:07:04.261 6 memzones totaling size 4.142822 MiB 00:07:04.261 size: 1.000366 MiB name: RG_ring_0_3802797 00:07:04.261 size: 1.000366 MiB name: RG_ring_1_3802797 00:07:04.261 size: 1.000366 MiB name: RG_ring_4_3802797 00:07:04.261 size: 1.000366 MiB name: RG_ring_5_3802797 00:07:04.261 size: 0.125366 MiB name: RG_ring_2_3802797 00:07:04.261 size: 0.015991 MiB name: RG_ring_3_3802797 00:07:04.261 end memzones------- 00:07:04.261 18:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:04.261 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:04.261 list of free elements. size: 12.519348 MiB 00:07:04.261 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:04.261 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:04.261 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:04.261 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:04.261 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:04.261 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:04.261 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:04.261 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:04.261 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:04.261 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:04.261 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:04.261 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:04.261 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:04.261 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:04.261 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:04.261 list of standard malloc elements. 
size: 199.218079 MiB 00:07:04.261 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:04.261 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:04.261 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:04.261 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:04.262 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:04.262 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:04.262 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:04.262 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:04.262 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:04.262 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:04.262 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:04.262 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:04.262 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:04.262 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:04.262 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:04.262 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:04.262 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:04.262 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:04.262 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:04.262 list of memzone associated elements. 
size: 602.262573 MiB 00:07:04.262 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:04.262 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:04.262 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:04.262 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:04.262 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:04.262 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3802797_0 00:07:04.262 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:04.262 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3802797_0 00:07:04.262 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:04.262 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3802797_0 00:07:04.262 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:04.262 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:04.262 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:04.262 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:04.262 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:04.262 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3802797 00:07:04.262 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:04.262 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3802797 00:07:04.262 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:04.262 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3802797 00:07:04.262 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:04.262 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:04.262 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:04.262 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:04.262 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:04.262 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:04.262 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:04.262 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:04.262 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:04.262 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3802797 00:07:04.262 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:04.262 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3802797 00:07:04.262 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:04.262 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3802797 00:07:04.262 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:04.262 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3802797 00:07:04.262 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:04.262 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3802797 00:07:04.262 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:04.262 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:04.262 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:04.262 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:04.262 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:04.262 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:04.262 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:04.262 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3802797 00:07:04.262 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:04.262 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:04.262 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:04.262 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:04.262 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:04.262 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3802797 00:07:04.262 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:04.262 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:04.262 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:04.262 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3802797 00:07:04.262 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:04.262 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3802797 00:07:04.262 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:04.262 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:04.262 18:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:04.262 18:22:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3802797 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3802797 ']' 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3802797 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3802797 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3802797' 00:07:04.262 killing process with pid 3802797 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3802797 00:07:04.262 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3802797 00:07:04.521 00:07:04.521 real 0m1.590s 00:07:04.521 user 0m1.697s 00:07:04.521 sys 0m0.487s 00:07:04.521 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.521 18:22:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:04.521 ************************************ 00:07:04.521 END TEST dpdk_mem_utility 00:07:04.521 ************************************ 00:07:04.779 18:22:22 -- common/autotest_common.sh@1142 -- # return 0 00:07:04.779 18:22:22 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:04.779 18:22:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.779 18:22:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.779 18:22:22 -- common/autotest_common.sh@10 -- # set +x 00:07:04.779 ************************************ 00:07:04.779 START TEST event 00:07:04.779 ************************************ 00:07:04.779 18:22:22 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:04.779 * Looking for test storage... 
00:07:04.779 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:04.779 18:22:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:04.779 18:22:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:04.779 18:22:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:04.779 18:22:22 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:04.779 18:22:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.779 18:22:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.779 ************************************ 00:07:04.779 START TEST event_perf 00:07:04.779 ************************************ 00:07:04.779 18:22:22 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:04.779 Running I/O for 1 seconds...[2024-07-21 18:22:22.952740] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:04.779 [2024-07-21 18:22:22.952820] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803157 ] 00:07:05.036 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.036 [2024-07-21 18:22:23.070760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.036 [2024-07-21 18:22:23.176305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.036 [2024-07-21 18:22:23.176389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.036 [2024-07-21 18:22:23.176493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.036 [2024-07-21 18:22:23.176494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.410 Running I/O for 1 seconds... 00:07:06.410 lcore 0: 168330 00:07:06.410 lcore 1: 168330 00:07:06.410 lcore 2: 168330 00:07:06.410 lcore 3: 168331 00:07:06.410 done. 00:07:06.410 00:07:06.410 real 0m1.323s 00:07:06.410 user 0m4.184s 00:07:06.410 sys 0m0.132s 00:07:06.410 18:22:24 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.410 18:22:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.410 ************************************ 00:07:06.410 END TEST event_perf 00:07:06.410 ************************************ 00:07:06.410 18:22:24 event -- common/autotest_common.sh@1142 -- # return 0 00:07:06.410 18:22:24 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:06.410 18:22:24 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:06.410 18:22:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.410 18:22:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.410 ************************************ 00:07:06.410 START TEST event_reactor 00:07:06.410 ************************************ 00:07:06.410 18:22:24 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:06.410 [2024-07-21 18:22:24.362187] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:07:06.410 [2024-07-21 18:22:24.362315] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803381 ] 00:07:06.410 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.410 [2024-07-21 18:22:24.482306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.410 [2024-07-21 18:22:24.582848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.786 test_start 00:07:07.786 oneshot 00:07:07.786 tick 100 00:07:07.786 tick 100 00:07:07.786 tick 250 00:07:07.786 tick 100 00:07:07.786 tick 100 00:07:07.786 tick 100 00:07:07.786 tick 250 00:07:07.786 tick 500 00:07:07.786 tick 100 00:07:07.786 tick 100 00:07:07.786 tick 250 00:07:07.786 tick 100 00:07:07.786 tick 100 00:07:07.786 test_end 00:07:07.786 00:07:07.786 real 0m1.323s 00:07:07.786 user 0m1.181s 00:07:07.786 sys 0m0.136s 00:07:07.786 18:22:25 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.786 18:22:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:07.786 ************************************ 00:07:07.786 END TEST event_reactor 00:07:07.786 ************************************ 00:07:07.786 18:22:25 event -- common/autotest_common.sh@1142 -- # return 0 00:07:07.786 18:22:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:07.786 18:22:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.786 18:22:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.786 18:22:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.786 ************************************ 00:07:07.786 START TEST event_reactor_perf 00:07:07.786 ************************************ 00:07:07.786 18:22:25 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:07.786 [2024-07-21 18:22:25.765754] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:07:07.786 [2024-07-21 18:22:25.765865] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803591 ] 00:07:07.786 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.786 [2024-07-21 18:22:25.886813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.786 [2024-07-21 18:22:25.982853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.163 test_start 00:07:09.163 test_end 00:07:09.163 Performance: 597144 events per second 00:07:09.163 00:07:09.163 real 0m1.321s 00:07:09.163 user 0m1.182s 00:07:09.163 sys 0m0.132s 00:07:09.163 18:22:27 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.163 18:22:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.163 ************************************ 00:07:09.163 END TEST event_reactor_perf 00:07:09.163 ************************************ 00:07:09.163 18:22:27 event -- common/autotest_common.sh@1142 -- # return 0 00:07:09.163 18:22:27 event -- event/event.sh@49 -- # uname -s 00:07:09.163 18:22:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:09.163 18:22:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:09.163 18:22:27 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.163 18:22:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.163 18:22:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.163 ************************************ 00:07:09.163 START TEST event_scheduler 00:07:09.163 ************************************ 00:07:09.163 18:22:27 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:09.163 * Looking for test storage... 00:07:09.163 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:07:09.163 18:22:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:09.163 18:22:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3803814 00:07:09.163 18:22:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.163 18:22:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:09.163 18:22:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3803814 00:07:09.163 18:22:27 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3803814 ']' 00:07:09.163 18:22:27 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.163 18:22:27 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.163 18:22:27 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:09.163 18:22:27 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.163 18:22:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:09.163 [2024-07-21 18:22:27.282546] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:09.163 [2024-07-21 18:22:27.282631] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803814 ] 00:07:09.163 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.422 [2024-07-21 18:22:27.378426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.422 [2024-07-21 18:22:27.463779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.422 [2024-07-21 18:22:27.463856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.422 [2024-07-21 18:22:27.463958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.422 [2024-07-21 18:22:27.463958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.989 18:22:28 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.989 18:22:28 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:07:09.989 18:22:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:09.989 18:22:28 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.989 18:22:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:09.989 [2024-07-21 18:22:28.194659] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:09.989 [2024-07-21 18:22:28.194681] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:09.989 [2024-07-21 18:22:28.194693] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:09.989 [2024-07-21 18:22:28.194701] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:09.989 [2024-07-21 18:22:28.194708] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:09.989 18:22:28 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.989 18:22:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:09.989 18:22:28 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.989 18:22:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.248 [2024-07-21 18:22:28.274572] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
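scheduler_create_thread, which follows, exercises thread placement through a test-local rpc.py plugin (scheduler_plugin), not a core RPC. Assuming the plugin module is importable — the harness's rpc_cmd arranges that — the traced calls reduce to something like this (thread ids 11 and 12 are simply the ones this run happened to get):

  # -n thread name, -m cpumask for pinned threads, -a active load percentage.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
      -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12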
00:07:10.248 18:22:28 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.248 18:22:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:10.248 18:22:28 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.248 18:22:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.248 18:22:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.248 ************************************ 00:07:10.248 START TEST scheduler_create_thread 00:07:10.248 ************************************ 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.248 2 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.248 3 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.248 4 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.248 5 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.248 6 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.248 7 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.248 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.248 8 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.249 9 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.249 10 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.249 18:22:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.151 18:22:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.151 18:22:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:12.151 18:22:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:12.151 18:22:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.151 18:22:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.087 18:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.087 00:07:13.087 real 0m2.621s 00:07:13.087 user 0m0.023s 00:07:13.087 sys 0m0.009s 00:07:13.087 18:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.087 18:22:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.087 ************************************ 00:07:13.087 END TEST scheduler_create_thread 00:07:13.087 ************************************ 00:07:13.087 18:22:30 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:13.087 18:22:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:13.087 18:22:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3803814 00:07:13.087 18:22:30 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3803814 ']' 00:07:13.087 18:22:30 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3803814 00:07:13.087 18:22:30 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:13.087 18:22:30 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.087 18:22:30 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3803814 00:07:13.087 18:22:31 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:13.087 18:22:31 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:13.087 18:22:31 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3803814' 00:07:13.087 killing process with pid 3803814 00:07:13.087 18:22:31 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3803814 00:07:13.087 18:22:31 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3803814 00:07:13.346 [2024-07-21 18:22:31.417076] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
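The scheduler_create_thread test above is driven entirely through the scheduler test app's plugin RPCs: it creates fully active and fully idle threads pinned to cores 0-3, creates unpinned threads, retunes one thread's active percentage, and deletes another by ID. A minimal bash sketch of the same flow, replayed by hand against a running scheduler app — the RPC names and the -n/-m/-a flags are taken verbatim from the trace, while the thread IDs 11 and 12 captured above were specific to this run:

    RPC="./scripts/rpc.py --plugin scheduler_plugin"

    # Fully active threads, one pinned to each of cores 0-3
    # (-m is a cpumask, -a is the active percentage).
    for mask in 0x1 0x2 0x4 0x8; do
        $RPC scheduler_thread_create -n active_pinned -m $mask -a 100
    done

    # Matching fully idle threads on the same cores.
    for mask in 0x1 0x2 0x4 0x8; do
        $RPC scheduler_thread_create -n idle_pinned -m $mask -a 0
    done

    # Unpinned threads; scheduler_thread_create prints the new thread's ID,
    # which scheduler_thread_set_active and scheduler_thread_delete consume.
    $RPC scheduler_thread_create -n one_third_active -a 30
    tid=$($RPC scheduler_thread_create -n half_active -a 0)
    $RPC scheduler_thread_set_active "$tid" 50
    tid=$($RPC scheduler_thread_create -n deleted -a 100)
    $RPC scheduler_thread_delete "$tid"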
00:07:13.605 00:07:13.605 real 0m4.471s 00:07:13.605 user 0m8.545s 00:07:13.605 sys 0m0.459s 00:07:13.605 18:22:31 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.606 18:22:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.606 ************************************ 00:07:13.606 END TEST event_scheduler 00:07:13.606 ************************************ 00:07:13.606 18:22:31 event -- common/autotest_common.sh@1142 -- # return 0 00:07:13.606 18:22:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:13.606 18:22:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:13.606 18:22:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.606 18:22:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.606 18:22:31 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.606 ************************************ 00:07:13.606 START TEST app_repeat 00:07:13.606 ************************************ 00:07:13.606 18:22:31 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3804401 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3804401' 00:07:13.606 Process app_repeat pid: 3804401 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:13.606 spdk_app_start Round 0 00:07:13.606 18:22:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3804401 /var/tmp/spdk-nbd.sock 00:07:13.606 18:22:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3804401 ']' 00:07:13.606 18:22:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.606 18:22:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.606 18:22:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:13.606 18:22:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.606 18:22:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.606 [2024-07-21 18:22:31.753639] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
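Each app_repeat round that follows uses the harness setup just traced: load the nbd kernel module, start the app_repeat binary on two cores with a four-second round timer and a dedicated RPC socket, install a cleanup trap, and block until the socket answers. A sketch of that setup, with the long Jenkins workspace path shortened; the flags, trap body, and pid handling mirror the event.sh trace above:

    modprobe nbd                                    # the repeat test drives /dev/nbd*
    ./test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &     # RPC socket, cores 0-1, 4 s per round
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock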
00:07:13.606 [2024-07-21 18:22:31.753739] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3804401 ] 00:07:13.606 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.865 [2024-07-21 18:22:31.878487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.865 [2024-07-21 18:22:31.981682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.865 [2024-07-21 18:22:31.981687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.802 18:22:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.802 18:22:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:14.802 18:22:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.802 Malloc0 00:07:14.802 18:22:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.060 Malloc1 00:07:15.060 18:22:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.060 18:22:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:15.318 /dev/nbd0 00:07:15.318 18:22:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.318 18:22:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:15.318 18:22:33 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.318 1+0 records in 00:07:15.318 1+0 records out 00:07:15.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260342 s, 15.7 MB/s 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:15.318 18:22:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:15.318 18:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.318 18:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.318 18:22:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:15.577 /dev/nbd1 00:07:15.577 18:22:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:15.577 18:22:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.577 1+0 records in 00:07:15.577 1+0 records out 00:07:15.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226092 s, 18.1 MB/s 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:15.577 18:22:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:15.577 18:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.577 
18:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.577 18:22:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.577 18:22:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.577 18:22:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:15.835 { 00:07:15.835 "nbd_device": "/dev/nbd0", 00:07:15.835 "bdev_name": "Malloc0" 00:07:15.835 }, 00:07:15.835 { 00:07:15.835 "nbd_device": "/dev/nbd1", 00:07:15.835 "bdev_name": "Malloc1" 00:07:15.835 } 00:07:15.835 ]' 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:15.835 { 00:07:15.835 "nbd_device": "/dev/nbd0", 00:07:15.835 "bdev_name": "Malloc0" 00:07:15.835 }, 00:07:15.835 { 00:07:15.835 "nbd_device": "/dev/nbd1", 00:07:15.835 "bdev_name": "Malloc1" 00:07:15.835 } 00:07:15.835 ]' 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:15.835 /dev/nbd1' 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:15.835 /dev/nbd1' 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:15.835 256+0 records in 00:07:15.835 256+0 records out 00:07:15.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669557 s, 157 MB/s 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.835 18:22:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.835 256+0 records in 00:07:15.835 256+0 records out 00:07:15.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308838 s, 34.0 MB/s 00:07:15.835 18:22:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.835 18:22:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.094 256+0 records in 00:07:16.094 256+0 records out 
00:07:16.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0334011 s, 31.4 MB/s 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.094 18:22:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.352 18:22:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.611 18:22:34 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.611 18:22:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:16.869 18:22:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:16.869 18:22:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:17.128 18:22:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:17.387 [2024-07-21 18:22:35.435303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.387 [2024-07-21 18:22:35.533500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.387 [2024-07-21 18:22:35.533505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.387 [2024-07-21 18:22:35.584315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:17.387 [2024-07-21 18:22:35.584372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:20.718 18:22:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:20.718 18:22:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:20.718 spdk_app_start Round 1 00:07:20.718 18:22:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3804401 /var/tmp/spdk-nbd.sock 00:07:20.718 18:22:38 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3804401 ']' 00:07:20.718 18:22:38 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:20.718 18:22:38 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.718 18:22:38 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:20.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
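The waitfornbd trace that repeats in every round (the grep of /proc/partitions followed by a single direct-I/O dd read) is a two-stage readiness probe: first wait for the kernel to publish the device, then confirm a real 4 KiB block can be read off it. A sketch of that helper reconstructed from the xtrace; the sleep between polls and the /tmp scratch path are assumptions, everything else mirrors the trace:

    waitfornbd() {
        local nbd_name=$1 i
        # Stage 1: wait (up to 20 tries) for the device to show up in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumption: the real helper paces its polls
        done
        # Stage 2: prove the device is readable with one direct 4 KiB read.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
               [ "$(stat -c %s /tmp/nbdtest)" != 0 ]; then
                rm -f /tmp/nbdtest
                return 0
            fi
        done
        rm -f /tmp/nbdtest
        return 1
    }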
00:07:20.718 18:22:38 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.718 18:22:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.718 18:22:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.718 18:22:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:20.718 18:22:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.718 Malloc0 00:07:20.718 18:22:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.976 Malloc1 00:07:20.976 18:22:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.976 18:22:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.977 18:22:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:20.977 /dev/nbd0 00:07:21.236 18:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:21.236 18:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.236 1+0 records in 00:07:21.236 1+0 records out 00:07:21.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164965 s, 24.8 MB/s 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:21.236 18:22:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:21.236 18:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.236 18:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.236 18:22:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:21.495 /dev/nbd1 00:07:21.495 18:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:21.495 18:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.495 1+0 records in 00:07:21.495 1+0 records out 00:07:21.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292725 s, 14.0 MB/s 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:21.495 18:22:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:21.495 18:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.495 18:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.495 18:22:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.495 18:22:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.495 18:22:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:21.754 { 00:07:21.754 "nbd_device": "/dev/nbd0", 00:07:21.754 "bdev_name": "Malloc0" 00:07:21.754 }, 00:07:21.754 { 00:07:21.754 "nbd_device": "/dev/nbd1", 00:07:21.754 "bdev_name": "Malloc1" 00:07:21.754 } 00:07:21.754 ]' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:21.754 { 00:07:21.754 "nbd_device": "/dev/nbd0", 00:07:21.754 "bdev_name": "Malloc0" 00:07:21.754 }, 00:07:21.754 { 00:07:21.754 "nbd_device": "/dev/nbd1", 00:07:21.754 "bdev_name": "Malloc1" 00:07:21.754 } 00:07:21.754 ]' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:21.754 /dev/nbd1' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:21.754 /dev/nbd1' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:21.754 256+0 records in 00:07:21.754 256+0 records out 00:07:21.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104183 s, 101 MB/s 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:21.754 256+0 records in 00:07:21.754 256+0 records out 00:07:21.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306697 s, 34.2 MB/s 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:21.754 256+0 records in 00:07:21.754 256+0 records out 00:07:21.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327236 s, 32.0 MB/s 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.754 18:22:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.013 18:22:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.272 18:22:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.530 18:22:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.530 18:22:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.530 18:22:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.788 18:22:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.788 18:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.788 18:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.788 18:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:22.788 18:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.788 18:22:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.788 18:22:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.788 18:22:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.788 18:22:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.788 18:22:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:23.047 18:22:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:23.306 [2024-07-21 18:22:41.280692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.306 [2024-07-21 18:22:41.376127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.306 [2024-07-21 18:22:41.376132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.306 [2024-07-21 18:22:41.425822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:23.306 [2024-07-21 18:22:41.425879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:25.840 18:22:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:25.840 18:22:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:25.840 spdk_app_start Round 2 00:07:25.840 18:22:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3804401 /var/tmp/spdk-nbd.sock 00:07:25.840 18:22:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3804401 ']' 00:07:25.840 18:22:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:26.101 18:22:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.101 18:22:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:26.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
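The write/verify block that runs in each round above (the 256x4 KiB dd transfers and the two cmp calls) is the actual data-integrity check: seed a 1 MiB file from /dev/urandom, push it through every nbd device with O_DIRECT, then compare each device byte-for-byte against the seed. A standalone sketch of the pattern, with an illustrative temp path in place of the workspace one:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write it to the device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $nbd                              # read back and verify
    done
    rm $tmp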
00:07:26.101 18:22:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.101 18:22:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.101 18:22:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.101 18:22:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:26.101 18:22:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.359 Malloc0 00:07:26.359 18:22:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.615 Malloc1 00:07:26.615 18:22:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.615 18:22:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.616 18:22:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.616 18:22:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:26.616 18:22:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.616 18:22:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.616 18:22:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:26.874 /dev/nbd0 00:07:26.874 18:22:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:26.874 18:22:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.874 1+0 records in 00:07:26.874 1+0 records out 00:07:26.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247997 s, 16.5 MB/s 00:07:26.874 18:22:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:27.131 18:22:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:27.131 18:22:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:27.131 18:22:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:27.131 18:22:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:27.131 18:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.131 18:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.132 18:22:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:27.132 /dev/nbd1 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:27.390 1+0 records in 00:07:27.390 1+0 records out 00:07:27.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021227 s, 19.3 MB/s 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:27.390 18:22:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:27.390 { 00:07:27.390 "nbd_device": "/dev/nbd0", 00:07:27.390 "bdev_name": "Malloc0" 00:07:27.390 }, 00:07:27.390 { 00:07:27.390 "nbd_device": "/dev/nbd1", 00:07:27.390 "bdev_name": "Malloc1" 00:07:27.390 } 00:07:27.390 ]' 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:27.390 { 00:07:27.390 "nbd_device": "/dev/nbd0", 00:07:27.390 "bdev_name": "Malloc0" 00:07:27.390 }, 00:07:27.390 { 00:07:27.390 "nbd_device": "/dev/nbd1", 00:07:27.390 "bdev_name": "Malloc1" 00:07:27.390 } 00:07:27.390 ]' 00:07:27.390 18:22:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:27.648 /dev/nbd1' 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:27.648 /dev/nbd1' 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:27.648 256+0 records in 00:07:27.648 256+0 records out 00:07:27.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100619 s, 104 MB/s 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:27.648 256+0 records in 00:07:27.648 256+0 records out 00:07:27.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262399 s, 40.0 MB/s 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:27.648 256+0 records in 00:07:27.648 256+0 records out 00:07:27.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326297 s, 32.1 MB/s 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.648 18:22:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.907 18:22:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.907 18:22:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.907 18:22:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.907 18:22:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.907 18:22:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.907 18:22:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.907 18:22:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.907 18:22:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.907 18:22:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.907 18:22:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.164 18:22:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.421 18:22:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.421 18:22:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:28.679 18:22:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:28.937 [2024-07-21 18:22:47.135922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.196 [2024-07-21 18:22:47.235973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.196 [2024-07-21 18:22:47.235977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.196 [2024-07-21 18:22:47.288554] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:29.196 [2024-07-21 18:22:47.288612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:31.746 18:22:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3804401 /var/tmp/spdk-nbd.sock 00:07:31.746 18:22:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3804401 ']' 00:07:31.746 18:22:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.746 18:22:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.746 18:22:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:31.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
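The teardown that closes the test (the killprocess 3804401 trace just below) follows a defensive pattern: check the pid is still alive with kill -0, look up its command name, refuse to signal a sudo wrapper, then SIGTERM and reap it. A sketch reconstructed from that trace; the behavior when the process actually is sudo is elided here, since the trace only shows the comparison:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                      # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1          # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap and propagate exit status
    }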
00:07:31.746 18:22:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.746 18:22:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:32.005 18:22:50 event.app_repeat -- event/event.sh@39 -- # killprocess 3804401 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3804401 ']' 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3804401 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3804401 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3804401' 00:07:32.005 killing process with pid 3804401 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3804401 00:07:32.005 18:22:50 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3804401 00:07:32.265 spdk_app_start is called in Round 0. 00:07:32.265 Shutdown signal received, stop current app iteration 00:07:32.265 Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 reinitialization... 00:07:32.265 spdk_app_start is called in Round 1. 00:07:32.265 Shutdown signal received, stop current app iteration 00:07:32.265 Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 reinitialization... 00:07:32.265 spdk_app_start is called in Round 2. 00:07:32.265 Shutdown signal received, stop current app iteration 00:07:32.265 Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 reinitialization... 00:07:32.265 spdk_app_start is called in Round 3. 
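killprocess, traced above for pid 3804401, guards against PID reuse: it confirms the process still exists, checks the command name ps reports (reactor_0 here) rather than signalling blindly, then sends SIGTERM and reaps the child. A condensed sketch of that pattern, not the verbatim helper:

    # Kill a test-owned SPDK app by pid, guarding against pid reuse.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 in this log
        [ "$name" = sudo ] && return 1                # refuse to signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap it if it is our child
    }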
00:07:32.265 Shutdown signal received, stop current app iteration 00:07:32.265 18:22:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:32.265 18:22:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:32.265 00:07:32.265 real 0m18.664s 00:07:32.265 user 0m40.133s 00:07:32.265 sys 0m4.116s 00:07:32.265 18:22:50 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.265 18:22:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:32.265 ************************************ 00:07:32.265 END TEST app_repeat 00:07:32.265 ************************************ 00:07:32.265 18:22:50 event -- common/autotest_common.sh@1142 -- # return 0 00:07:32.265 18:22:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:32.265 18:22:50 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:32.265 18:22:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.265 18:22:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.265 18:22:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.265 ************************************ 00:07:32.265 START TEST cpu_locks 00:07:32.265 ************************************ 00:07:32.265 18:22:50 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:32.524 * Looking for test storage... 00:07:32.524 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:32.524 18:22:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:32.524 18:22:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:32.524 18:22:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:32.524 18:22:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:32.524 18:22:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.524 18:22:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.524 18:22:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.524 ************************************ 00:07:32.524 START TEST default_locks 00:07:32.524 ************************************ 00:07:32.524 18:22:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:32.524 18:22:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3807079 00:07:32.524 18:22:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3807079 00:07:32.524 18:22:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.524 18:22:50 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3807079 ']' 00:07:32.524 18:22:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.524 18:22:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.524 18:22:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
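waitforlisten, entering the trace above with max_retries=100, polls until the new target's UNIX-domain RPC socket exists and answers. A minimal sketch under the same assumptions (rpc_get_methods is a stock SPDK RPC; the 100 ms poll interval is my choice, not taken from the script):

    # Block until an SPDK target (pid $1) serves RPCs on socket $2.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 1; i <= 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
            if [ -S "$rpc_addr" ] && rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1                                      # never came up
    }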
00:07:32.524 18:22:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.524 18:22:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.524 [2024-07-21 18:22:50.623815] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:32.524 [2024-07-21 18:22:50.623912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807079 ] 00:07:32.524 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.784 [2024-07-21 18:22:50.744826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.784 [2024-07-21 18:22:50.841780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.720 18:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.720 18:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:33.720 18:22:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3807079 00:07:33.720 18:22:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3807079 00:07:33.720 18:22:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:34.287 lslocks: write error 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3807079 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3807079 ']' 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3807079 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807079 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807079' 00:07:34.287 killing process with pid 3807079 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3807079 00:07:34.287 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3807079 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3807079 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3807079 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3807079 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3807079 ']' 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.854 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3807079) - No such process 00:07:34.854 ERROR: process (pid: 3807079) is no longer running 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:34.854 00:07:34.854 real 0m2.250s 00:07:34.854 user 0m2.397s 00:07:34.854 sys 0m0.847s 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.854 18:22:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.854 ************************************ 00:07:34.854 END TEST default_locks 00:07:34.854 ************************************ 00:07:34.854 18:22:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:34.854 18:22:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:34.854 18:22:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.854 18:22:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.854 18:22:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.854 ************************************ 00:07:34.854 START TEST default_locks_via_rpc 00:07:34.854 ************************************ 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3807459 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3807459 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3807459 ']' 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.854 18:22:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.854 [2024-07-21 18:22:52.957801] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:34.854 [2024-07-21 18:22:52.957869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807459 ] 00:07:34.854 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.113 [2024-07-21 18:22:53.080309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.113 [2024-07-21 18:22:53.178615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3807459 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3807459 00:07:36.050 18:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
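locks_exist, whose trace ends just above, asserts that the target holds its per-core lock files by listing the POSIX locks owned by the pid and searching for the spdk_cpu_lock prefix. The stray "lslocks: write error" lines in this log are most likely benign: grep -q exits on the first match, so lslocks takes an EPIPE on the rest of its output. The check as a sketch:

    # Assert that pid $1 holds at least one SPDK per-core lock file.
    locks_exist_sketch() {
        local pid=$1
        # lslocks -p lists the locks a process holds; SPDK's appear as
        # /var/tmp/spdk_cpu_lock_* entries. grep -q may EPIPE lslocks (harmless).
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }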
00:07:36.309 18:22:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3807459 00:07:36.309 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3807459 ']' 00:07:36.309 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3807459 00:07:36.309 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:36.309 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.309 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807459 00:07:36.567 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.567 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.567 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807459' 00:07:36.567 killing process with pid 3807459 00:07:36.567 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3807459 00:07:36.567 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3807459 00:07:36.825 00:07:36.825 real 0m2.000s 00:07:36.825 user 0m2.135s 00:07:36.825 sys 0m0.735s 00:07:36.825 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.825 18:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.825 ************************************ 00:07:36.825 END TEST default_locks_via_rpc 00:07:36.825 ************************************ 00:07:36.825 18:22:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:36.825 18:22:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:36.825 18:22:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.825 18:22:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.825 18:22:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.825 ************************************ 00:07:36.825 START TEST non_locking_app_on_locked_coremask 00:07:36.825 ************************************ 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3807673 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3807673 /var/tmp/spdk.sock 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3807673 ']' 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.825 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.825 [2024-07-21 18:22:55.039267] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:36.825 [2024-07-21 18:22:55.039348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807673 ] 00:07:37.083 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.083 [2024-07-21 18:22:55.157751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.083 [2024-07-21 18:22:55.260439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3807845 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3807845 /var/tmp/spdk2.sock 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3807845 ']' 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:38.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.018 18:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:38.018 [2024-07-21 18:22:55.959179] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:38.018 [2024-07-21 18:22:55.959259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807845 ] 00:07:38.018 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.018 [2024-07-21 18:22:56.116973] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
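The test traced here starts a second target on the same core mask as the first, but with --disable-cpumask-locks and its own RPC socket, so both can share core 0 without fighting over the lock file; the "CPU core locks deactivated" notice above is the second instance acknowledging that flag. The arrangement as a sketch (spdk_tgt stands for the build/bin/spdk_tgt path in the trace; startup waiting elided, see the waitforlisten sketch earlier):

    # Primary target claims core 0 and its lock file.
    spdk_tgt -m 0x1 &
    pid1=$!
    # Secondary reuses core 0 but opts out of lock enforcement; it needs a
    # distinct RPC socket (-r) so the two control planes do not collide.
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!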
00:07:38.018 [2024-07-21 18:22:56.117005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.277 [2024-07-21 18:22:56.314694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.843 18:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.843 18:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:38.844 18:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3807673 00:07:38.844 18:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.844 18:22:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3807673 00:07:40.217 lslocks: write error 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3807673 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3807673 ']' 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3807673 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807673 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807673' 00:07:40.217 killing process with pid 3807673 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3807673 00:07:40.217 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3807673 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3807845 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3807845 ']' 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3807845 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807845 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807845' 00:07:40.782 
killing process with pid 3807845 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3807845 00:07:40.782 18:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3807845 00:07:41.039 00:07:41.039 real 0m4.222s 00:07:41.039 user 0m4.538s 00:07:41.039 sys 0m1.413s 00:07:41.039 18:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.039 18:22:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.039 ************************************ 00:07:41.039 END TEST non_locking_app_on_locked_coremask 00:07:41.039 ************************************ 00:07:41.297 18:22:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:41.297 18:22:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:41.297 18:22:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.297 18:22:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.297 18:22:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.297 ************************************ 00:07:41.297 START TEST locking_app_on_unlocked_coremask 00:07:41.297 ************************************ 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3808325 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3808325 /var/tmp/spdk.sock 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3808325 ']' 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.297 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.297 [2024-07-21 18:22:59.346478] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
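Every case in this stretch is wrapped by run_test, which produces the starred START/END TEST banners and the real/user/sys timing seen above. A reduced sketch of such a wrapper (banner width is cosmetic; the real helper also manages the dotted test-name prefix visible in the xtrace):

    # Run a named test function, banner it, and time it.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }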
00:07:41.297 [2024-07-21 18:22:59.346554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808325 ] 00:07:41.297 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.297 [2024-07-21 18:22:59.454088] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:41.297 [2024-07-21 18:22:59.454124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.555 [2024-07-21 18:22:59.551629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3808408 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3808408 /var/tmp/spdk2.sock 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3808408 ']' 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.812 18:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.812 [2024-07-21 18:22:59.797818] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
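The bracketed "DPDK EAL parameters" lines above repeat for every target in this log and are worth decoding (per DPDK's EAL option set; exact runtime file layout varies by DPDK version): --no-shconf skips shared config files, --huge-unlink removes hugepage backing files right after they are mapped, --file-prefix namespaces per-process runtime files (SPDK derives it from the pid, which is why spdk_pid3808325 and friends match each target), and --base-virtaddr pins the mapping base address. One observable consequence, as a sketch assuming the default hugetlbfs mount and DPDK's <prefix>map_N naming:

    # Unique --file-prefix values keep hugepage backing files from colliding
    # between instances; with --huge-unlink they are removed right after
    # mapping, so this listing is usually empty while the targets run.
    ls /dev/hugepages/spdk_pid*map_* 2>/dev/null \
        || echo "no visible hugepage files (expected with --huge-unlink)"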
00:07:41.812 [2024-07-21 18:22:59.797894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808408 ] 00:07:41.812 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.812 [2024-07-21 18:22:59.959270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.070 [2024-07-21 18:23:00.171879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.692 18:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.692 18:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:42.692 18:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3808408 00:07:42.692 18:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3808408 00:07:42.692 18:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.629 lslocks: write error 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3808325 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3808325 ']' 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3808325 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3808325 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3808325' 00:07:43.629 killing process with pid 3808325 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3808325 00:07:43.629 18:23:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3808325 00:07:44.195 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3808408 00:07:44.195 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3808408 ']' 00:07:44.195 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3808408 00:07:44.195 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:44.195 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.195 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3808408 00:07:44.453 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:44.453 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.453 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3808408' 00:07:44.453 killing process with pid 3808408 00:07:44.453 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3808408 00:07:44.453 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3808408 00:07:44.711 00:07:44.711 real 0m3.483s 00:07:44.711 user 0m3.647s 00:07:44.711 sys 0m1.243s 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.711 ************************************ 00:07:44.711 END TEST locking_app_on_unlocked_coremask 00:07:44.711 ************************************ 00:07:44.711 18:23:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:44.711 18:23:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:44.711 18:23:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.711 18:23:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.711 18:23:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.711 ************************************ 00:07:44.711 START TEST locking_app_on_locked_coremask 00:07:44.711 ************************************ 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3808800 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3808800 /var/tmp/spdk.sock 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3808800 ']' 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.711 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.712 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.712 18:23:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.712 [2024-07-21 18:23:02.909748] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
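The lock files every one of these checks revolves around live at /var/tmp/spdk_cpu_lock_<core> (the {000..002} glob appears later in this log): claiming a core amounts to holding an exclusive lock on that core's file. An illustrative emulation with flock(1) — not SPDK's actual code, which takes its locks in-process:

    # Emulate "claim core 1": hold an exclusive, non-blocking lock on its file.
    lockfile=/var/tmp/spdk_cpu_lock_001
    exec 200>"$lockfile"               # fd 200 stays open for the lock's lifetime
    if ! flock -n 200; then
        echo "core 1 already claimed by another process" >&2
        exit 1
    fi
    # ...the core stays "claimed" until fd 200 closes or the process exits...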
00:07:44.712 [2024-07-21 18:23:02.909829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808800 ] 00:07:44.970 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.970 [2024-07-21 18:23:03.028444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.970 [2024-07-21 18:23:03.134440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3808976 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3808976 /var/tmp/spdk2.sock 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3808976 /var/tmp/spdk2.sock 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3808976 /var/tmp/spdk2.sock 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3808976 ']' 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.898 18:23:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.898 [2024-07-21 18:23:03.901076] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
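The NOT helper entering the trace above inverts a command's exit status so that an expected failure passes the test: it records es, validates the argument is runnable (the valid_exec_arg / type -t dance), runs it, and succeeds only when the command failed. A trimmed sketch:

    # Expected-failure assertion: succeed only if the wrapped command fails.
    NOT_sketch() {
        local es=0
        "$@" || es=$?
        # es > 128 would mean death by signal; this sketch treats any
        # nonzero status as the failure we were hoping for.
        (( es != 0 ))
    }
    # Mirroring the trace: NOT_sketch waitforlisten_sketch "$pid2" /var/tmp/spdk2.sock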
00:07:45.898 [2024-07-21 18:23:03.901151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808976 ] 00:07:45.898 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.898 [2024-07-21 18:23:04.060036] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3808800 has claimed it. 00:07:45.898 [2024-07-21 18:23:04.060084] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:46.463 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3808976) - No such process 00:07:46.463 ERROR: process (pid: 3808976) is no longer running 00:07:46.463 18:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.463 18:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:46.463 18:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:46.463 18:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.463 18:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:46.463 18:23:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.463 18:23:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3808800 00:07:46.463 18:23:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3808800 00:07:46.463 18:23:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:47.397 lslocks: write error 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3808800 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3808800 ']' 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3808800 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3808800 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3808800' 00:07:47.397 killing process with pid 3808800 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3808800 00:07:47.397 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3808800 00:07:47.655 00:07:47.655 real 0m2.857s 00:07:47.655 user 0m3.156s 00:07:47.655 sys 0m0.967s 00:07:47.655 18:23:05 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.655 18:23:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.655 ************************************ 00:07:47.655 END TEST locking_app_on_locked_coremask 00:07:47.655 ************************************ 00:07:47.655 18:23:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:47.655 18:23:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:47.655 18:23:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.655 18:23:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.655 18:23:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.655 ************************************ 00:07:47.655 START TEST locking_overlapped_coremask 00:07:47.655 ************************************ 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3809190 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3809190 /var/tmp/spdk.sock 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3809190 ']' 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.655 18:23:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.655 [2024-07-21 18:23:05.840177] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
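This test moves to -m 0x7, i.e. bits 0-2 set, which is why three reactors come up on cores 0, 1 and 2 in the next trace. Decoding a core mask, as a sketch:

    # Decode an SPDK core mask (hex) into core numbers.
    mask=0x7
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core"
    done                               # prints core 0, core 1, core 2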
00:07:47.655 [2024-07-21 18:23:05.840261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809190 ] 00:07:47.913 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.914 [2024-07-21 18:23:05.960018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.914 [2024-07-21 18:23:06.067060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.914 [2024-07-21 18:23:06.067081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.914 [2024-07-21 18:23:06.067085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3809364 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3809364 /var/tmp/spdk2.sock 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3809364 /var/tmp/spdk2.sock 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3809364 /var/tmp/spdk2.sock 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3809364 ']' 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.850 18:23:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.850 [2024-07-21 18:23:06.846090] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
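The second target asks for -m 0x1c, i.e. cores 2-4. Since 0x7 & 0x1c = 0x4, core 2 is contested, and the claim that follows is expected to fail on exactly that core. The prediction as a sketch:

    # Predict which core two masks will fight over.
    m1=0x7 m2=0x1c
    overlap=$(( m1 & m2 ))             # 0x4 -> bit 2
    printf 'overlap mask: %#x\n' "$overlap"
    for ((core = 0; core < 64; core++)); do
        (( (overlap >> core) & 1 )) && echo "contested core: $core"
    done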
00:07:48.850 [2024-07-21 18:23:06.846180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809364 ] 00:07:48.850 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.850 [2024-07-21 18:23:06.972652] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3809190 has claimed it. 00:07:48.850 [2024-07-21 18:23:06.972689] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:49.419 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3809364) - No such process 00:07:49.419 ERROR: process (pid: 3809364) is no longer running 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3809190 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3809190 ']' 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3809190 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3809190 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3809190' 00:07:49.419 killing process with pid 3809190 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3809190 00:07:49.419 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3809190 00:07:49.988 00:07:49.988 real 0m2.110s 00:07:49.988 user 0m5.805s 00:07:49.988 sys 0m0.586s 00:07:49.988 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.988 18:23:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.988 ************************************ 00:07:49.988 END TEST locking_overlapped_coremask 00:07:49.988 ************************************ 00:07:49.988 18:23:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:49.988 18:23:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:49.988 18:23:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.988 18:23:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.988 18:23:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.988 ************************************ 00:07:49.988 START TEST locking_overlapped_coremask_via_rpc 00:07:49.988 ************************************ 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3809568 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3809568 /var/tmp/spdk.sock 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3809568 ']' 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.988 18:23:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.988 [2024-07-21 18:23:08.037777] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:49.988 [2024-07-21 18:23:08.037839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809568 ] 00:07:49.988 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.988 [2024-07-21 18:23:08.156305] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
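check_remaining_locks, traced a little earlier, expands the /var/tmp/spdk_cpu_lock_* glob and compares it word-for-word against the brace expansion expected for the claimed cores ({000..002} for mask 0x7) — that is the long backslash-escaped [[ ... ]] comparison in the trace. The same check, unescaped:

    # Exactly the lock files for cores 0-2 must remain: no more, no fewer.
    check_remaining_locks_sketch() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${expected[*]}" ]]
    }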
00:07:49.988 [2024-07-21 18:23:08.156344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.247 [2024-07-21 18:23:08.265303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.247 [2024-07-21 18:23:08.265387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.247 [2024-07-21 18:23:08.265392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3809746 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3809746 /var/tmp/spdk2.sock 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3809746 ']' 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.815 18:23:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.074 [2024-07-21 18:23:09.046474] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:51.074 [2024-07-21 18:23:09.046567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809746 ] 00:07:51.074 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.074 [2024-07-21 18:23:09.176148] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
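Both targets in this via-RPC variant boot with --disable-cpumask-locks, which is why the overlap on core 2 is tolerated at startup (the "CPU core locks deactivated" notices above). The two masks decode to cores 0-2 and 2-4; a small sketch for expanding a hex coremask into core ids:

    # 0x7 -> cores 0 1 2, 0x1c -> cores 2 3 4 (core 2 is the overlap)
    mask=0x1c
    for ((i = 0; i < 64; i++)); do
        (( (mask >> i) & 1 )) && printf '%d ' "$i"
    done
    echo    # prints: 2 3 4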
00:07:51.074 [2024-07-21 18:23:09.176176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.333 [2024-07-21 18:23:09.348357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.333 [2024-07-21 18:23:09.348444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.333 [2024-07-21 18:23:09.348445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.900 [2024-07-21 18:23:10.032282] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3809568 has claimed it. 
00:07:51.900 request: 00:07:51.900 { 00:07:51.900 "method": "framework_enable_cpumask_locks", 00:07:51.900 "req_id": 1 00:07:51.900 } 00:07:51.900 Got JSON-RPC error response 00:07:51.900 response: 00:07:51.900 { 00:07:51.900 "code": -32603, 00:07:51.900 "message": "Failed to claim CPU core: 2" 00:07:51.900 } 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3809568 /var/tmp/spdk.sock 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3809568 ']' 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.900 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.158 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.158 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:52.158 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3809746 /var/tmp/spdk2.sock 00:07:52.158 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3809746 ']' 00:07:52.158 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.158 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.158 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
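The request/response pair above is the crux of the test: cpumask locks were just enabled on the first target over RPC, so repeating the call against the second target's socket must fail while core 2 is contended. A sketch of the two calls with scripts/rpc.py (rpc_cmd in the harness is assumed here to wrap the same tool):

    # First target, default socket /var/tmp/spdk.sock: succeeds
    ./scripts/rpc.py framework_enable_cpumask_locks
    # Second target: fails because core 2 is already claimed
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # => {"code": -32603, "message": "Failed to claim CPU core: 2"}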
00:07:52.158 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.158 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.415 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.415 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:52.415 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:52.415 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:52.415 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:52.415 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:52.415 00:07:52.415 real 0m2.554s 00:07:52.415 user 0m1.212s 00:07:52.415 sys 0m0.269s 00:07:52.415 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.415 18:23:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.415 ************************************ 00:07:52.415 END TEST locking_overlapped_coremask_via_rpc 00:07:52.415 ************************************ 00:07:52.415 18:23:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:52.415 18:23:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:52.415 18:23:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3809568 ]] 00:07:52.415 18:23:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3809568 00:07:52.415 18:23:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3809568 ']' 00:07:52.415 18:23:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3809568 00:07:52.415 18:23:10 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:52.415 18:23:10 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.415 18:23:10 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3809568 00:07:52.673 18:23:10 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.673 18:23:10 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.673 18:23:10 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3809568' 00:07:52.673 killing process with pid 3809568 00:07:52.673 18:23:10 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3809568 00:07:52.673 18:23:10 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3809568 00:07:52.932 18:23:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3809746 ]] 00:07:52.932 18:23:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3809746 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3809746 ']' 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3809746 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3809746 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3809746' 00:07:52.932 killing process with pid 3809746 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3809746 00:07:52.932 18:23:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3809746 00:07:53.499 18:23:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:53.499 18:23:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:53.499 18:23:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3809568 ]] 00:07:53.499 18:23:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3809568 00:07:53.499 18:23:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3809568 ']' 00:07:53.499 18:23:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3809568 00:07:53.499 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3809568) - No such process 00:07:53.499 18:23:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3809568 is not found' 00:07:53.499 Process with pid 3809568 is not found 00:07:53.499 18:23:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3809746 ]] 00:07:53.499 18:23:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3809746 00:07:53.499 18:23:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3809746 ']' 00:07:53.499 18:23:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3809746 00:07:53.499 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3809746) - No such process 00:07:53.499 18:23:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3809746 is not found' 00:07:53.499 Process with pid 3809746 is not found 00:07:53.499 18:23:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:53.499 00:07:53.499 real 0m20.962s 00:07:53.499 user 0m35.393s 00:07:53.499 sys 0m7.203s 00:07:53.499 18:23:11 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.499 18:23:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.499 ************************************ 00:07:53.499 END TEST cpu_locks 00:07:53.499 ************************************ 00:07:53.499 18:23:11 event -- common/autotest_common.sh@1142 -- # return 0 00:07:53.499 00:07:53.499 real 0m48.668s 00:07:53.499 user 1m30.822s 00:07:53.499 sys 0m12.630s 00:07:53.499 18:23:11 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.499 18:23:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:53.499 ************************************ 00:07:53.499 END TEST event 00:07:53.499 ************************************ 00:07:53.499 18:23:11 -- common/autotest_common.sh@1142 -- # return 0 00:07:53.499 18:23:11 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:53.499 18:23:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:53.499 18:23:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.499 
18:23:11 -- common/autotest_common.sh@10 -- # set +x 00:07:53.499 ************************************ 00:07:53.499 START TEST thread 00:07:53.499 ************************************ 00:07:53.499 18:23:11 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:53.499 * Looking for test storage... 00:07:53.499 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:07:53.499 18:23:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:53.499 18:23:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:53.499 18:23:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.499 18:23:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.499 ************************************ 00:07:53.499 START TEST thread_poller_perf 00:07:53.499 ************************************ 00:07:53.499 18:23:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:53.757 [2024-07-21 18:23:11.715188] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:53.757 [2024-07-21 18:23:11.715290] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810198 ] 00:07:53.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.757 [2024-07-21 18:23:11.836494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.757 [2024-07-21 18:23:11.936947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.757 Running 1000 pollers for 1 seconds with 1 microseconds period. 
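poller_perf's banner restates its flags: comparing this invocation (-b 1000 -l 1 -t 1) with the next one (-l 0) against their "Running 1000 pollers for 1 seconds with N microseconds period" banners suggests the mapping below. This is read off the log, not taken from documentation:

    # Inferred flag meanings for test/thread/poller_perf (assumption):
    #   -b 1000   number of pollers to register
    #   -l 1      poller period in microseconds (0 = run continuously)
    #   -t 1      test duration in seconds
    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1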
00:07:55.179 ====================================== 00:07:55.179 busy:2304895880 (cyc) 00:07:55.179 total_run_count: 540000 00:07:55.179 tsc_hz: 2300000000 (cyc) 00:07:55.179 ====================================== 00:07:55.179 poller_cost: 4268 (cyc), 1855 (nsec) 00:07:55.179 00:07:55.179 real 0m1.330s 00:07:55.179 user 0m1.192s 00:07:55.179 sys 0m0.131s 00:07:55.179 18:23:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.179 18:23:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.179 ************************************ 00:07:55.179 END TEST thread_poller_perf 00:07:55.179 ************************************ 00:07:55.179 18:23:13 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:55.179 18:23:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:55.179 18:23:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:55.179 18:23:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.179 18:23:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:55.179 ************************************ 00:07:55.179 START TEST thread_poller_perf 00:07:55.179 ************************************ 00:07:55.179 18:23:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:55.179 [2024-07-21 18:23:13.117651] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:55.180 [2024-07-21 18:23:13.117723] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810390 ] 00:07:55.180 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.180 [2024-07-21 18:23:13.237073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.180 [2024-07-21 18:23:13.336888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.180 Running 1000 pollers for 1 seconds with 0 microseconds period. 
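The figures in the result block above are self-consistent: poller_cost is busy cycles divided by run count, converted to nanoseconds via tsc_hz, and the 0-microsecond-period run that follows obeys the same arithmetic. A quick check in shell (64-bit integer math):

    busy=2304895880 runs=540000 tsc_hz=2300000000
    echo "cyc per poll:  $((busy / runs))"                        # 4268
    echo "nsec per poll: $((busy * 1000000000 / tsc_hz / runs))"  # 1855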
00:07:56.553 ====================================== 00:07:56.553 busy:2301758066 (cyc) 00:07:56.553 total_run_count: 8872000 00:07:56.553 tsc_hz: 2300000000 (cyc) 00:07:56.553 ====================================== 00:07:56.553 poller_cost: 259 (cyc), 112 (nsec) 00:07:56.553 00:07:56.553 real 0m1.317s 00:07:56.553 user 0m1.175s 00:07:56.553 sys 0m0.136s 00:07:56.553 18:23:14 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.553 18:23:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:56.553 ************************************ 00:07:56.553 END TEST thread_poller_perf 00:07:56.553 ************************************ 00:07:56.553 18:23:14 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:56.553 18:23:14 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:56.553 18:23:14 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:56.553 18:23:14 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.553 18:23:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.553 18:23:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.553 ************************************ 00:07:56.553 START TEST thread_spdk_lock 00:07:56.553 ************************************ 00:07:56.553 18:23:14 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:56.553 [2024-07-21 18:23:14.519516] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:56.553 [2024-07-21 18:23:14.519602] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810590 ] 00:07:56.553 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.553 [2024-07-21 18:23:14.640934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:56.553 [2024-07-21 18:23:14.742114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.553 [2024-07-21 18:23:14.742120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.119 [2024-07-21 18:23:15.250424] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:57.120 [2024-07-21 18:23:15.250467] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:57.120 [2024-07-21 18:23:15.250484] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14d3640 00:07:57.120 [2024-07-21 18:23:15.251472] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:57.120 [2024-07-21 18:23:15.251578] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:57.120 [2024-07-21 18:23:15.251604] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:57.120 Starting test contend 00:07:57.120 Worker Delay Wait us Hold us Total us 00:07:57.120 0 3 161097 193602 354699 00:07:57.120 1 5 87839 293990 381829 00:07:57.120 PASS test contend 00:07:57.120 Starting test hold_by_poller 00:07:57.120 PASS test hold_by_poller 00:07:57.120 Starting test hold_by_message 00:07:57.120 PASS test hold_by_message 00:07:57.120 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:57.120 100014 assertions passed 00:07:57.120 0 assertions failed 00:07:57.377 00:07:57.377 real 0m0.833s 00:07:57.377 user 0m1.196s 00:07:57.377 sys 0m0.141s 00:07:57.377 18:23:15 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.377 18:23:15 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:57.377 ************************************ 00:07:57.377 END TEST thread_spdk_lock 00:07:57.377 ************************************ 00:07:57.377 18:23:15 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:57.377 00:07:57.377 real 0m3.822s 00:07:57.377 user 0m3.690s 00:07:57.377 sys 0m0.651s 00:07:57.377 18:23:15 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.377 18:23:15 thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.377 ************************************ 00:07:57.377 END TEST thread 00:07:57.377 ************************************ 00:07:57.377 18:23:15 -- common/autotest_common.sh@1142 -- # return 0 00:07:57.377 18:23:15 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:07:57.377 18:23:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.377 18:23:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.377 18:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:57.377 ************************************ 00:07:57.377 START TEST accel 00:07:57.377 ************************************ 00:07:57.377 18:23:15 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:07:57.377 * Looking for test storage... 00:07:57.377 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:57.377 18:23:15 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:57.377 18:23:15 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:57.377 18:23:15 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:57.377 18:23:15 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3810740 00:07:57.377 18:23:15 accel -- accel/accel.sh@63 -- # waitforlisten 3810740 00:07:57.377 18:23:15 accel -- common/autotest_common.sh@829 -- # '[' -z 3810740 ']' 00:07:57.377 18:23:15 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.377 18:23:15 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.377 18:23:15 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:57.377 18:23:15 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
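The *ERROR* spinlock lines above are the point of spdk_lock_test: it deliberately trips SPDK's misuse detection (lock held while a thread goes off CPU, relocking from the wrong thread) and the run still finishes with 100014 assertions passed, 0 failed. In the contend table, each worker's Total is simply Wait plus Hold:

    # Total us = Wait us + Hold us per worker row above:
    echo $((161097 + 193602))   # 354699 (worker 0)
    echo $(( 87839 + 293990))   # 381829 (worker 1)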
00:07:57.377 18:23:15 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:57.377 18:23:15 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.377 18:23:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.377 18:23:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.377 18:23:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.377 18:23:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.377 18:23:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.377 18:23:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.377 18:23:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:57.377 18:23:15 accel -- accel/accel.sh@41 -- # jq -r . 00:07:57.635 [2024-07-21 18:23:15.604625] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:57.635 [2024-07-21 18:23:15.604701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810740 ] 00:07:57.635 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.635 [2024-07-21 18:23:15.723388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.635 [2024-07-21 18:23:15.826703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.570 18:23:16 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.570 18:23:16 accel -- common/autotest_common.sh@862 -- # return 0 00:07:58.570 18:23:16 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:58.570 18:23:16 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:58.570 18:23:16 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:58.570 18:23:16 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:58.570 18:23:16 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:58.570 18:23:16 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:58.570 18:23:16 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:58.570 18:23:16 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.570 18:23:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.570 18:23:16 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.570 
18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.570 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.570 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.571 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.571 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.571 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.571 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.571 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.571 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.571 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.571 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.571 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.571 18:23:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:58.571 18:23:16 accel -- accel/accel.sh@72 -- # IFS== 00:07:58.571 18:23:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:58.571 18:23:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:58.571 18:23:16 accel -- accel/accel.sh@75 -- # killprocess 3810740 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@948 -- # '[' -z 3810740 ']' 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@952 -- # kill -0 3810740 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@953 -- # uname 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3810740 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3810740' 00:07:58.571 killing process with pid 3810740 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@967 -- # kill 3810740 00:07:58.571 18:23:16 accel -- common/autotest_common.sh@972 -- # wait 3810740 00:07:58.830 18:23:17 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:58.830 18:23:17 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:58.830 18:23:17 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:59.089 18:23:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.089 18:23:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.089 18:23:17 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:59.089 18:23:17 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:59.089 18:23:17 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:59.089 18:23:17 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.089 18:23:17 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.089 18:23:17 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.089 18:23:17 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.089 18:23:17 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.089 18:23:17 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:59.089 18:23:17 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
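The long IFS== loop above is accel.sh consuming the opcode-to-module table; every opcode in this run maps to the software module. A condensed sketch of the same technique (assuming rpc_cmd wraps scripts/rpc.py against the default socket):

    declare -A expected_opcs
    while IFS== read -r opc module; do      # each line looks like "copy=software"
        expected_opcs["$opc"]=$module
    done < <(./scripts/rpc.py accel_get_opc_assignments \
             | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')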
00:07:59.089 18:23:17 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.089 18:23:17 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:59.089 18:23:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:59.089 18:23:17 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:59.089 18:23:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:59.089 18:23:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.089 18:23:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.089 ************************************ 00:07:59.089 START TEST accel_missing_filename 00:07:59.089 ************************************ 00:07:59.089 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:59.089 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:59.089 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:59.089 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:59.089 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.089 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:59.089 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.089 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:59.089 18:23:17 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:59.089 18:23:17 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:59.089 18:23:17 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.089 18:23:17 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.089 18:23:17 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.089 18:23:17 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.089 18:23:17 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.089 18:23:17 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:59.089 18:23:17 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:59.089 [2024-07-21 18:23:17.210825] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:59.089 [2024-07-21 18:23:17.210906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811037 ] 00:07:59.089 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.347 [2024-07-21 18:23:17.333647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.347 [2024-07-21 18:23:17.432693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.347 [2024-07-21 18:23:17.476402] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.347 [2024-07-21 18:23:17.537828] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:59.604 A filename is required. 
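accel_missing_filename wants exactly the failure above: -w compress without -l <file> makes accel_perf abort, and the NOT wrapper turns that expected failure into a pass. A simplified sketch of the pattern (the real helper in autotest_common.sh also normalizes exit codes, as the es= trace below shows):

    NOT() { if "$@"; then return 1; else return 0; fi; }    # simplified sketch
    NOT ./build/examples/accel_perf -t 1 -w compress        # passes, because the
    # run dies with "A filename is required."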
00:07:59.604 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:59.604 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:59.604 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:59.604 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:59.604 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:59.604 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:59.604 00:07:59.604 real 0m0.432s 00:07:59.604 user 0m0.295s 00:07:59.604 sys 0m0.179s 00:07:59.604 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.604 18:23:17 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 ************************************ 00:07:59.604 END TEST accel_missing_filename 00:07:59.604 ************************************ 00:07:59.604 18:23:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:59.605 18:23:17 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:59.605 18:23:17 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:59.605 18:23:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.605 18:23:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.605 ************************************ 00:07:59.605 START TEST accel_compress_verify 00:07:59.605 ************************************ 00:07:59.605 18:23:17 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:59.605 18:23:17 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:59.605 18:23:17 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:59.605 18:23:17 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:59.605 18:23:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.605 18:23:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:59.605 18:23:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.605 18:23:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:59.605 18:23:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:59.605 18:23:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:59.605 18:23:17 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.605 18:23:17 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.605 18:23:17 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.605 18:23:17 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.605 
18:23:17 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.605 18:23:17 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:59.605 18:23:17 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:59.605 [2024-07-21 18:23:17.717346] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:59.605 [2024-07-21 18:23:17.717427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811070 ] 00:07:59.605 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.862 [2024-07-21 18:23:17.839062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.862 [2024-07-21 18:23:17.938624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.862 [2024-07-21 18:23:17.988503] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.862 [2024-07-21 18:23:18.061102] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:00.120 00:08:00.120 Compression does not support the verify option, aborting. 00:08:00.120 18:23:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:00.120 18:23:18 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.120 18:23:18 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:00.120 18:23:18 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:00.120 18:23:18 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:00.120 18:23:18 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.120 00:08:00.120 real 0m0.453s 00:08:00.120 user 0m0.296s 00:08:00.120 sys 0m0.189s 00:08:00.120 18:23:18 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.120 18:23:18 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 ************************************ 00:08:00.120 END TEST accel_compress_verify 00:08:00.120 ************************************ 00:08:00.120 18:23:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.120 18:23:18 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:00.120 18:23:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:00.120 18:23:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.120 18:23:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 ************************************ 00:08:00.120 START TEST accel_wrong_workload 00:08:00.121 ************************************ 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:00.121 18:23:18 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:08:00.121 18:23:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:00.121 18:23:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:00.121 18:23:18 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.121 18:23:18 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.121 18:23:18 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.121 18:23:18 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.121 18:23:18 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.121 18:23:18 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:00.121 18:23:18 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:00.121 Unsupported workload type: foobar 00:08:00.121 [2024-07-21 18:23:18.250509] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:00.121 accel_perf options: 00:08:00.121 [-h help message] 00:08:00.121 [-q queue depth per core] 00:08:00.121 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:00.121 [-T number of threads per core 00:08:00.121 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:00.121 [-t time in seconds] 00:08:00.121 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:00.121 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:00.121 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:00.121 [-l for compress/decompress workloads, name of uncompressed input file 00:08:00.121 [-S for crc32c workload, use this seed value (default 0) 00:08:00.121 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:00.121 [-f for fill workload, use this BYTE value (default 255) 00:08:00.121 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:00.121 [-y verify result if this switch is on] 00:08:00.121 [-a tasks to allocate per core (default: same value as -q)] 00:08:00.121 Can be used to spread operations across a wider range of memory. 
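The usage dump above is accel_perf rejecting -w foobar; the flags it lists match what the passing tests use. For contrast, the shape of a valid invocation (the crc32c test later in this run adds only a JSON config via -c):

    # -w picks a supported workload, -S seeds crc32c, -y verifies results
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y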
00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.121 00:08:00.121 real 0m0.030s 00:08:00.121 user 0m0.013s 00:08:00.121 sys 0m0.018s 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.121 18:23:18 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:00.121 ************************************ 00:08:00.121 END TEST accel_wrong_workload 00:08:00.121 ************************************ 00:08:00.121 Error: writing output failed: Broken pipe 00:08:00.121 18:23:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.121 18:23:18 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:00.121 18:23:18 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:00.121 18:23:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.121 18:23:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.379 ************************************ 00:08:00.379 START TEST accel_negative_buffers 00:08:00.379 ************************************ 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:00.379 18:23:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:00.379 18:23:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:00.379 18:23:18 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.379 18:23:18 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.379 18:23:18 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.379 18:23:18 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.379 18:23:18 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.379 18:23:18 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:00.379 18:23:18 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:00.379 -x option must be non-negative. 
00:08:00.379 [2024-07-21 18:23:18.358670] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:00.379 accel_perf options: 00:08:00.379 [-h help message] 00:08:00.379 [-q queue depth per core] 00:08:00.379 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:00.379 [-T number of threads per core 00:08:00.379 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:00.379 [-t time in seconds] 00:08:00.379 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:00.379 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:00.379 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:00.379 [-l for compress/decompress workloads, name of uncompressed input file 00:08:00.379 [-S for crc32c workload, use this seed value (default 0) 00:08:00.379 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:00.379 [-f for fill workload, use this BYTE value (default 255) 00:08:00.379 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:00.379 [-y verify result if this switch is on] 00:08:00.379 [-a tasks to allocate per core (default: same value as -q)] 00:08:00.379 Can be used to spread operations across a wider range of memory. 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.379 00:08:00.379 real 0m0.030s 00:08:00.379 user 0m0.014s 00:08:00.379 sys 0m0.016s 00:08:00.379 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.380 18:23:18 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:00.380 ************************************ 00:08:00.380 END TEST accel_negative_buffers 00:08:00.380 ************************************ 00:08:00.380 Error: writing output failed: Broken pipe 00:08:00.380 18:23:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.380 18:23:18 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:00.380 18:23:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:00.380 18:23:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.380 18:23:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.380 ************************************ 00:08:00.380 START TEST accel_crc32c 00:08:00.380 ************************************ 00:08:00.380 18:23:18 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:00.380 18:23:18 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:00.380 [2024-07-21 18:23:18.466120] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:00.380 [2024-07-21 18:23:18.466206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811279 ] 00:08:00.380 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.380 [2024-07-21 18:23:18.585614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.638 [2024-07-21 18:23:18.688706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.638 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.639 18:23:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:02.027 18:23:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.027 00:08:02.027 real 0m1.458s 00:08:02.027 user 0m1.297s 00:08:02.027 sys 0m0.174s 00:08:02.027 18:23:19 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.027 18:23:19 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:02.027 ************************************ 00:08:02.027 END TEST accel_crc32c 00:08:02.027 ************************************ 00:08:02.027 18:23:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:02.027 18:23:19 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:02.027 18:23:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:02.027 18:23:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.027 18:23:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.027 ************************************ 00:08:02.027 START TEST accel_crc32c_C2 00:08:02.027 ************************************ 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:02.027 18:23:19 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:02.027 18:23:19 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:02.027 [2024-07-21 18:23:20.007013] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:02.027 [2024-07-21 18:23:20.007095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811494 ] 00:08:02.027 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.027 [2024-07-21 18:23:20.128861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.027 [2024-07-21 18:23:20.224754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:08:02.286 18:23:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.225 00:08:03.225 real 0m1.443s 00:08:03.225 user 0m1.286s 00:08:03.225 sys 0m0.171s 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.225 18:23:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:03.225 ************************************ 00:08:03.225 END TEST accel_crc32c_C2 00:08:03.225 ************************************ 00:08:03.484 18:23:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.484 18:23:21 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:03.484 18:23:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:03.484 18:23:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.484 18:23:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.484 ************************************ 00:08:03.484 START TEST accel_copy 00:08:03.484 ************************************ 00:08:03.484 18:23:21 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:03.484 18:23:21 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:03.484 18:23:21 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
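Each subtest ends with the same three assertions and a timing block, visible above for accel_crc32c_C2: `[[ -n software ]]` and `[[ -n crc32c ]]` check that the parse loop actually captured a module and an opcode, and `[[ software == \s\o\f\t\w\a\r\e ]]` is xtrace's rendering of a literal string compare; when the right-hand side of `[[ == ]]` is quoted, bash escapes every character in the trace to show it is not being treated as a glob pattern. The `real/user/sys` triple is the shell's `time` output for the subtest. Spelled out, with variable names assumed:

    [[ -n $accel_module ]]               # the module line was parsed at all
    [[ -n $accel_opc ]]                  # the workload-type line was parsed
    [[ $accel_module == "software" ]]    # literal match; xtrace prints the quoted
                                         # RHS as \s\o\f\t\w\a\r\e

Since no hardware engine is configured in this short-fuzz run, every workload is expected to fall back to the software module.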
00:08:03.484 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.484 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.484 18:23:21 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:03.484 18:23:21 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:03.484 18:23:21 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:03.485 18:23:21 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.485 18:23:21 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.485 18:23:21 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.485 18:23:21 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.485 18:23:21 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.485 18:23:21 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:03.485 18:23:21 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:03.485 [2024-07-21 18:23:21.528891] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:03.485 [2024-07-21 18:23:21.528971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811687 ] 00:08:03.485 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.485 [2024-07-21 18:23:21.638542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.743 [2024-07-21 18:23:21.738793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.743 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:03.744 18:23:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 
18:23:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:05.120 18:23:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.120 00:08:05.120 real 0m1.436s 00:08:05.120 user 0m1.269s 00:08:05.120 sys 0m0.179s 00:08:05.120 18:23:22 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.120 18:23:22 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:05.120 ************************************ 00:08:05.120 END TEST accel_copy 00:08:05.120 ************************************ 00:08:05.120 18:23:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:05.120 18:23:22 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:05.120 18:23:22 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:05.120 18:23:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.120 18:23:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.120 ************************************ 00:08:05.120 START TEST accel_fill 00:08:05.120 ************************************ 00:08:05.120 18:23:23 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:05.120 [2024-07-21 18:23:23.033522] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:05.120 [2024-07-21 18:23:23.033603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811887 ] 00:08:05.120 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.120 [2024-07-21 18:23:23.154327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.120 [2024-07-21 18:23:23.253991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
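The fill run is the first above to override defaults on the command line: `accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y`. The fill byte comes back in the trace as `val=0x80` (128 decimal), and `val=64` replaces the `val=32` the other runs echo. Reading `-a` as the buffer alignment in bytes is an inference from the flag values, not from accel_perf's help text:

    # -f 128 -> fill pattern byte; accel_perf echoes it back in hex
    printf 'fill byte: 0x%02x\n' 128     # prints: fill byte: 0x80
    # -q 64  -> queue depth (the default runs trace val=32 here)
    # -a 64  -> buffer alignment in bytes (assumed meaning)
    build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y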
00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.120 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:05.121 18:23:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:06.493 18:23:24 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:06.493 18:23:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.493 00:08:06.493 real 0m1.457s 00:08:06.494 user 0m1.282s 00:08:06.494 sys 0m0.189s 00:08:06.494 18:23:24 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.494 18:23:24 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:06.494 ************************************ 00:08:06.494 END TEST accel_fill 00:08:06.494 ************************************ 00:08:06.494 18:23:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.494 18:23:24 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:06.494 18:23:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:06.494 18:23:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.494 18:23:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.494 ************************************ 00:08:06.494 START TEST accel_copy_crc32c 00:08:06.494 ************************************ 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:06.494 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:06.494 [2024-07-21 18:23:24.564502] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:06.494 [2024-07-21 18:23:24.564585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812081 ] 00:08:06.494 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.494 [2024-07-21 18:23:24.686586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.752 [2024-07-21 18:23:24.787444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.752 
18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.752 18:23:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.127 18:23:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.127 00:08:08.127 real 0m1.464s 00:08:08.127 user 0m1.280s 00:08:08.127 sys 0m0.198s 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.127 18:23:26 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:08.127 ************************************ 00:08:08.127 END TEST accel_copy_crc32c 00:08:08.127 ************************************ 00:08:08.127 18:23:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.127 18:23:26 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:08.127 18:23:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:08.127 18:23:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.127 18:23:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.127 ************************************ 00:08:08.127 START TEST accel_copy_crc32c_C2 00:08:08.127 ************************************ 00:08:08.127 18:23:26 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:08.127 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:08.127 [2024-07-21 18:23:26.105745] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:08.127 [2024-07-21 18:23:26.105833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812280 ] 00:08:08.127 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.127 [2024-07-21 18:23:26.227814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.127 [2024-07-21 18:23:26.332896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
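Every invocation above passes its configuration as `-c /dev/fd/62`: `build_accel_config` assembles a JSON document in the `accel_json_cfg` array, and the harness feeds it to accel_perf through bash process substitution rather than a file on disk. In this run the array stays empty, which is why each `[[ 0 -gt 0 ]]` and `[[ -n '' ]]` guard (presumably the switches for optional hardware modules) falls through, and the trailing `local IFS=,` / `jq -r .` traces are the array being joined into one document and round-tripped through jq. A hedged equivalent of the call:

    accel_json_cfg=()   # stays empty: no hardware accel module requested
    # <(...) is what xtrace shows as /dev/fd/62; the fd number can vary
    build/examples/accel_perf -c <(printf '%s' "${accel_json_cfg[*]}") \
        -t 1 -w copy_crc32c -y -C 2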
00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:08.385 18:23:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
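The `-C 2` flag is what distinguishes this subtest from plain accel_copy_crc32c: judging from the trace rather than accel_perf's help text, it chains two source buffers per operation, so with the default 4 KiB transfer size the drain above echoes both `val='4096 bytes'` and `val='8192 bytes'`:

    echo $((2 * 4096))   # -> 8192, the chained source size seen in the trace

The earlier accel_crc32c_C2 run traced only `val='4096 bytes'`, consistent with crc32c having no copy destination to size, and the `val=0` entry in both C2 drains is presumably the CRC seed.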
00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:09.761 real 0m1.470s
00:08:09.761 user 0m1.294s
00:08:09.761 sys 0m0.189s
00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:09.761 18:23:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:08:09.761 ************************************
00:08:09.761 END TEST accel_copy_crc32c_C2
00:08:09.761 ************************************
00:08:09.761 18:23:27 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:09.761 18:23:27 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:08:09.761 18:23:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:08:09.761 18:23:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:09.761 18:23:27 accel -- common/autotest_common.sh@10 -- # set +x
00:08:09.761 ************************************
00:08:09.761 START TEST accel_dualcast
00:08:09.761 ************************************
00:08:09.761 18:23:27 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:08:09.761 18:23:27 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
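The -c /dev/fd/62 in the command line above is worth a note: build_accel_config assembles a JSON config in memory and bash process substitution hands it to accel_perf as a file-descriptor path, so no temp file is written. A sketch of the pattern; the join logic and JSON body are illustrative guesses, not the literal build_accel_config from accel.sh (the empty-config case matches the accel_json_cfg=() and false [[ 0 -gt 0 ]] checks logged above):

    # Sketch: feed generated JSON to a tool through process substitution.
    # bash expands <(...) to a /dev/fd/NN path, hence the "-c /dev/fd/62" above.
    accel_json_cfg=()                                       # per-module fragments; empty here
    json_cfg="{ $(IFS=,; printf '%s' "${accel_json_cfg[*]}") }"
    accel_perf -c <(printf '%s\n' "$json_cfg" | jq -r .) -t 1 -w dualcast -y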
00:08:09.761 [2024-07-21 18:23:27.652034] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:09.761 [2024-07-21 18:23:27.652116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812477 ]
00:08:09.761 EAL: No free 2048 kB hugepages reported on node 1
00:08:09.761 [2024-07-21 18:23:27.773732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:09.761 [2024-07-21 18:23:27.873763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace condensed] 18:23:27 accel.accel_dualcast -- accel/accel.sh@20 -- # config loop read: val=0x1, val=dualcast, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes, interleaved with empty val= reads; the matching key names are consumed by read and not echoed in the trace
[xtrace condensed] 00:08:11.140 18:23:29 accel.accel_dualcast -- accel/accel.sh@20 -- # post-run drain: trailing empty val= reads
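Each value in the condensed config loop above corresponds to one line of accel_perf's startup dump. To reproduce a run outside the harness, the binary can be invoked directly; a hedged example follows. The -q/-o flags and their 32/4096 values are inferred from the numbers echoed above and assumed to be defaults, since the logged command line only passed -t, -w and -y:

    # Reproduce the dualcast run by hand (path as used by this CI job).
    #   -t 1         run for 1 second       -w dualcast   workload type
    #   -y           verify the results     -q 32         queue depth (assumed default)
    #   -o 4096      buffer size in bytes (assumed default, cf. val='4096 bytes')
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -q 32 -o 4096 -t 1 -w dualcast -y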
00:08:11.140 18:23:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:11.140 18:23:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:08:11.140 18:23:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:11.140 real 0m1.458s
00:08:11.140 user 0m1.281s
00:08:11.140 sys 0m0.188s
00:08:11.140 ************************************
00:08:11.140 END TEST accel_dualcast
00:08:11.140 ************************************
00:08:11.140 18:23:29 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:11.140 18:23:29 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:08:11.140 ************************************
00:08:11.140 START TEST accel_compare
00:08:11.140 ************************************
00:08:11.140 18:23:29 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
[xtrace condensed] 18:23:29 accel.accel_compare -- # harness bookkeeping as shown in full for accel_dualcast above (xtrace_disable/set +x around the banners, argc check, locals, build_accel_config, jq -r .)
00:08:11.140 [2024-07-21 18:23:29.182068] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:11.140 [2024-07-21 18:23:29.182143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812760 ]
00:08:11.140 EAL: No free 2048 kB hugepages reported on node 1
00:08:11.140 [2024-07-21 18:23:29.298886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:11.400 [2024-07-21 18:23:29.395852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace condensed] 18:23:29 accel.accel_compare -- accel/accel.sh@20 -- # config loop read: val=0x1, val=compare, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes, interleaved with empty val= reads
[xtrace condensed] 00:08:12.779 18:23:30 accel.accel_compare -- accel/accel.sh@20 -- # post-run drain: trailing empty val= reads
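Every test in this section is framed the same way: run_test prints the START banner, runs the wrapped command under time (which is where the real/user/sys triplets come from, like the compare numbers just below), then prints the END banner. A sketch shaped after those banners; the actual helper lives in common/autotest_common.sh and does more, including the xtrace_disable and argc checks visible in the trace:

    # run_test-style wrapper, consistent with the banners in this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # emits the real/user/sys triplet
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test accel_compare accel_test -t 1 -w compare -y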
00:08:12.779 18:23:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:12.779 18:23:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:08:12.779 18:23:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:12.779 real 0m1.432s
00:08:12.779 user 0m1.267s
00:08:12.779 sys 0m0.178s
00:08:12.779 ************************************
00:08:12.779 END TEST accel_compare
00:08:12.779 ************************************
00:08:12.779 18:23:30 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:12.779 18:23:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:08:12.779 ************************************
00:08:12.779 START TEST accel_xor
00:08:12.779 ************************************
00:08:12.780 18:23:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
[xtrace condensed] 18:23:30 accel.accel_xor -- # harness bookkeeping as in accel_dualcast above (argc check '[' 7 -le 1 ']', locals, build_accel_config, jq -r .)
00:08:12.780 [2024-07-21 18:23:30.700074] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:12.780 [2024-07-21 18:23:30.700155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813025 ]
00:08:12.780 EAL: No free 2048 kB hugepages reported on node 1
00:08:12.780 [2024-07-21 18:23:30.820434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:12.780 [2024-07-21 18:23:30.919933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace condensed] 18:23:30 accel.accel_xor -- accel/accel.sh@20 -- # config loop read: val=0x1, val=xor, val=2, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes, interleaved with empty val= reads
[xtrace condensed] 00:08:14.164 18:23:32 accel.accel_xor -- accel/accel.sh@20 -- # post-run drain: trailing empty val= reads
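A quick sanity check on the timing triplets: dualcast shows user 0m1.281s + sys 0m0.188s, roughly 1.47s, against real 0m1.458s, and compare shows 1.267s + 0.178s, roughly 1.45s, against real 0m1.432s (the slight overshoot is rounding plus the extra DPDK helper threads active during startup, whose CPU time is summed into the totals). CPU time tracking wall time this closely is expected here: the SPDK reactor busy-polls its core for the full 1-second measurement window plus app startup and teardown, so each accel_perf run keeps core 0 at effectively 100% utilization rather than sleeping between operations.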
00:08:14.164 18:23:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:14.164 18:23:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:14.164 18:23:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:14.164 real 0m1.445s
00:08:14.164 user 0m1.278s
00:08:14.164 sys 0m0.181s
00:08:14.164 ************************************
00:08:14.164 END TEST accel_xor
00:08:14.164 ************************************
00:08:14.164 18:23:32 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:14.164 18:23:32 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:08:14.164 ************************************
00:08:14.164 START TEST accel_xor
00:08:14.164 ************************************
00:08:14.164 18:23:32 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
[xtrace condensed] 18:23:32 accel.accel_xor -- # harness bookkeeping as in accel_dualcast above (argc check '[' 9 -le 1 ']', locals, build_accel_config, jq -r .)
00:08:14.164 [2024-07-21 18:23:32.228519] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:14.164 [2024-07-21 18:23:32.228610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813228 ]
00:08:14.164 EAL: No free 2048 kB hugepages reported on node 1
00:08:14.164 [2024-07-21 18:23:32.351101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:14.164 [2024-07-21 18:23:32.456947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace condensed] 18:23:32 accel.accel_xor -- accel/accel.sh@20 -- # config loop read: val=0x1, val=xor, val=3, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes, interleaved with empty val= reads
[xtrace condensed] 00:08:15.795 18:23:33 accel.accel_xor -- accel/accel.sh@20 -- # post-run drain: trailing empty val= reads
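The xor run just completed differs from the previous one only by the -x flag, and the config echo differed accordingly: the first run read back val=2 where this one read val=3, so -x evidently selects the number of xor source buffers (three inputs XORed into one destination here). The two harness invocations from this log, side by side:

    run_test accel_xor accel_test -t 1 -w xor -y        # first run: reads back val=2
    run_test accel_xor accel_test -t 1 -w xor -y -x 3   # this run:  reads back val=3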
00:08:15.795 18:23:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:15.795 18:23:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:15.795 18:23:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:15.795 real 0m1.468s
00:08:15.795 user 0m1.291s
00:08:15.795 sys 0m0.190s
00:08:15.795 ************************************
00:08:15.795 END TEST accel_xor
00:08:15.795 ************************************
00:08:15.795 18:23:33 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:15.795 18:23:33 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:08:15.795 ************************************
00:08:15.795 START TEST accel_dif_verify
00:08:15.795 ************************************
00:08:15.796 18:23:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
[xtrace condensed] 18:23:33 accel.accel_dif_verify -- # harness bookkeeping as in accel_dualcast above (argc check '[' 6 -le 1 ']', locals, build_accel_config, jq -r .)
00:08:15.796 [2024-07-21 18:23:33.756532] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:15.796 [2024-07-21 18:23:33.756618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813419 ]
00:08:15.796 EAL: No free 2048 kB hugepages reported on node 1
00:08:15.796 [2024-07-21 18:23:33.877858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:15.796 [2024-07-21 18:23:33.977545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace condensed] 18:23:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # config loop read: val=0x1, val=dif_verify, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=No, interleaved with empty val= reads
[xtrace condensed] 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # post-run drain: trailing empty val= reads
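The dif_verify configuration just echoed reads two '4096 bytes' values plus '512 bytes' and '8 bytes', where the copy-style tests above read a single buffer size; the key names are consumed by read and are not echoed in this trace. The 8-byte value matches the size of a T10 DIF tuple (2-byte guard CRC, 2-byte application tag, 4-byte reference tag), and 512 bytes is the classic protection interval, so a plausible reading, assuming that mapping, is 4096-byte buffers split into 4096/512 = 8 intervals, each carrying one 8-byte tuple, i.e. 64 bytes of protection metadata per buffer. Note also val=No here: the dif tests run without the -y verify flag used by the runs above.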
val= 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:16.986 18:23:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.986 00:08:16.986 real 0m1.463s 00:08:16.986 user 0m1.293s 00:08:16.986 sys 0m0.183s 00:08:16.986 18:23:35 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.986 18:23:35 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:16.986 ************************************ 00:08:16.986 END TEST accel_dif_verify 00:08:16.986 ************************************ 00:08:17.244 18:23:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:17.244 18:23:35 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:17.244 18:23:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:17.244 18:23:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.244 18:23:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.244 ************************************ 00:08:17.244 START TEST accel_dif_generate 00:08:17.244 ************************************ 00:08:17.244 18:23:35 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:17.244 
00:08:17.244 18:23:35 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:17.244 18:23:35 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:17.244 18:23:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:17.244 18:23:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:17.244 18:23:35 accel -- common/autotest_common.sh@10 -- # set +x
00:08:17.244 ************************************
00:08:17.244 START TEST accel_dif_generate
00:08:17.244 ************************************
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:08:17.244 18:23:35 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:08:17.244 [2024-07-21 18:23:35.296755] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:17.244 [2024-07-21 18:23:35.296827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813621 ]
00:08:17.244 EAL: No free 2048 kB hugepages reported on node 1
00:08:17.244 [2024-07-21 18:23:35.405937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.502 [2024-07-21 18:23:35.506095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.502 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:08:17.502 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:08:17.502 18:23:35 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:08:17.502 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:17.502 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:17.502 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:08:17.502 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:08:17.503 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:08:17.503 18:23:35 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:08:17.503 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:08:17.503 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:08:17.503 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:08:17.503 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:08:17.503 18:23:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:08:18.880 18:23:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:18.880 18:23:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:08:18.880 18:23:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:18.880 real 0m1.449s
00:08:18.880 user 0m1.283s
00:08:18.880 sys 0m0.179s
00:08:18.880 18:23:36 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:18.880 18:23:36 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:08:18.880 ************************************
00:08:18.880 END TEST accel_dif_generate
00:08:18.880 ************************************
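The -c /dev/fd/62 in the invocation above is the footprint of a process substitution: build_accel_config assembles JSON snippets in the accel_json_cfg array (empty here, hence the [[ 0 -gt 0 ]] trace) and hands them to accel_perf without a temp file. A sketch of that pattern, with a hypothetical placeholder config rather than what build_accel_config actually emits:

#!/usr/bin/env bash
# Sketch of the -c /dev/fd/62 pattern: bash process substitution gives
# accel_perf its JSON config as an anonymous file descriptor.
# Assumption: an empty object selects the default software module.
accel_json_cfg=()   # accel.sh appends JSON snippets here when a module is forced
cfg="{ $(IFS=,; echo "${accel_json_cfg[*]}") }"
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
  -c <(echo "$cfg") -t 1 -w dif_generate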
00:08:18.880 18:23:36 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:18.880 18:23:36 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:08:18.880 18:23:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:18.880 18:23:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:18.880 18:23:36 accel -- common/autotest_common.sh@10 -- # set +x
00:08:18.880 ************************************
00:08:18.880 START TEST accel_dif_generate_copy
00:08:18.880 ************************************
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:08:18.880 18:23:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:08:18.880 [2024-07-21 18:23:36.810048] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:18.880 [2024-07-21 18:23:36.810091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813812 ]
00:08:18.880 EAL: No free 2048 kB hugepages reported on node 1
00:08:18.880 [2024-07-21 18:23:36.912733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.880 [2024-07-21 18:23:37.018300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.880 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:08:18.880 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:08:18.880 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:08:18.880 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:18.880 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:18.881 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:08:18.881 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:08:18.881 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:08:18.881 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:08:18.881 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:08:18.881 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:08:18.881 18:23:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:08:20.258 18:23:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:20.258 18:23:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:08:20.258 18:23:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:20.258 real 0m1.435s
00:08:20.258 user 0m1.282s
00:08:20.258 sys 0m0.166s
00:08:20.258 18:23:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:20.258 18:23:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:08:20.258 ************************************
00:08:20.258 END TEST accel_dif_generate_copy
00:08:20.258 ************************************
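Each section closes with the same three checks, and the last one always reads [[ software == \s\o\f\t\w\a\r\e ]]. The backslashes are not in the script: when the right-hand side of a [[ == ]] comparison is quoted, bash matches it literally instead of as a glob, and xtrace renders that literal match with per-character escapes. A standalone illustration:

#!/usr/bin/env bash
# Why the log shows [[ software == \s\o\f\t\w\a\r\e ]]: a quoted RHS in
# [[ == ]] is matched literally, and set -x prints it escaped.
set -x
module=software
[[ $module == "software" ]] && echo "module check passed"
set +x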
00:08:20.258 18:23:38 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:20.258 18:23:38 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:08:20.258 18:23:38 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:08:20.258 18:23:38 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:08:20.258 18:23:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:20.258 18:23:38 accel -- common/autotest_common.sh@10 -- # set +x
00:08:20.258 ************************************
00:08:20.258 START TEST accel_comp
00:08:20.258 ************************************
00:08:20.258 18:23:38 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:08:20.258 18:23:38 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:08:20.258 18:23:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:08:20.258 18:23:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:08:20.258 18:23:38 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:08:20.259 18:23:38 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:08:20.259 18:23:38 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:08:20.259 18:23:38 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:20.259 18:23:38 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:20.259 18:23:38 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:20.259 18:23:38 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:20.259 18:23:38 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:20.259 18:23:38 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=,
00:08:20.259 18:23:38 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:08:20.259 [2024-07-21 18:23:38.322458] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:20.259 [2024-07-21 18:23:38.322503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814025 ]
00:08:20.259 EAL: No free 2048 kB hugepages reported on node 1
00:08:20.259 [2024-07-21 18:23:38.425222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:20.518 [2024-07-21 18:23:38.528265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:08:20.518 18:23:38 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:08:21.893 18:23:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:21.893 18:23:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:08:21.893 18:23:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:21.893 real 0m1.429s
00:08:21.893 user 0m1.280s
00:08:21.893 sys 0m0.164s
00:08:21.893 18:23:39 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:21.893 18:23:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:08:21.893 ************************************
00:08:21.893 END TEST accel_comp
00:08:21.893 ************************************
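The START TEST / END TEST banners and the real/user/sys lines around each section come from the run_test wrapper in autotest_common.sh, which banners and times a named invocation. A minimal sketch of that shape, assuming only the banner-and-time behavior (the real helper also manages xtrace state and return codes):

#!/usr/bin/env bash
# Minimal sketch of the run_test wrapper whose banners appear above.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                # produces the real/user/sys lines in the log
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}
run_test accel_comp echo "accel_test -t 1 -w compress (placeholder command)"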
00:08:21.893 18:23:39 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:21.893 18:23:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:08:21.893 18:23:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:08:21.893 18:23:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:21.893 18:23:39 accel -- common/autotest_common.sh@10 -- # set +x
00:08:21.893 ************************************
00:08:21.893 START TEST accel_decomp
00:08:21.893 ************************************
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:08:21.893 18:23:39 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:08:21.894 [2024-07-21 18:23:39.839094] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:21.894 [2024-07-21 18:23:39.839174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814306 ]
00:08:21.894 EAL: No free 2048 kB hugepages reported on node 1
00:08:21.894 [2024-07-21 18:23:39.959305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:22.152 [2024-07-21 18:23:40.062208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:08:22.152 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:08:22.153 18:23:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:08:23.087 18:23:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:23.087 18:23:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:23.087 18:23:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:23.087 real 0m1.447s
00:08:23.087 user 0m1.282s
00:08:23.087 sys 0m0.178s
00:08:23.087 18:23:41 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:23.087 18:23:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:08:23.087 ************************************
00:08:23.087 END TEST accel_decomp
00:08:23.087 ************************************
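accel_decomp above runs decompress on test/accel/bib with -y, and accel_decomp_full below repeats it with -o 0; the traced block size changes from '4096 bytes' to '111250 bytes'. A hedged sketch of the pair, assuming -y asks accel_perf to verify the output and -o sets the transfer size, with 0 apparently meaning the whole input (check accel_perf -h for the authoritative option meanings):

#!/usr/bin/env bash
# Sketch of the two decompress runs. Assumptions: -y = verify output,
# -o = transfer size, -o 0 = full input (block size '111250 bytes' below).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}
BIB="$SPDK_DIR/test/accel/bib"
echo '{}' | "$SPDK_DIR/build/examples/accel_perf" -c /dev/stdin \
  -t 1 -w decompress -l "$BIB" -y         # 4096-byte blocks, as traced above
echo '{}' | "$SPDK_DIR/build/examples/accel_perf" -c /dev/stdin \
  -t 1 -w decompress -l "$BIB" -y -o 0    # full-file transfer size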
00:08:23.346 18:23:41 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:23.346 18:23:41 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:23.346 18:23:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:08:23.346 18:23:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:23.346 18:23:41 accel -- common/autotest_common.sh@10 -- # set +x
00:08:23.346 ************************************
00:08:23.346 START TEST accel_decomp_full
00:08:23.346 ************************************
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=,
00:08:23.346 18:23:41 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r .
00:08:23.346 [2024-07-21 18:23:41.353898] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:23.346 [2024-07-21 18:23:41.353979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814566 ]
00:08:23.346 EAL: No free 2048 kB hugepages reported on node 1
00:08:23.346 [2024-07-21 18:23:41.475035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:23.605 [2024-07-21 18:23:41.575885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:08:23.605 18:23:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:08:24.982 18:23:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:24.982 18:23:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:24.982 18:23:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:24.982 real 0m1.469s
00:08:24.982 user 0m1.281s
00:08:24.982 sys 0m0.201s
00:08:24.982 18:23:42 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:24.982 18:23:42 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:08:24.982 ************************************
00:08:24.982 END TEST accel_decomp_full
00:08:24.982 ************************************
00:08:24.982 18:23:42 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:24.982 18:23:42 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:24.982 18:23:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:08:24.982 18:23:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:24.982 18:23:42 accel -- common/autotest_common.sh@10 -- # set +x
00:08:24.982 ************************************
00:08:24.982 START TEST accel_decomp_mcore
00:08:24.982 ************************************
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:08:24.982 18:23:42 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:08:24.982 [2024-07-21 18:23:42.890989] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:08:24.982 [2024-07-21 18:23:42.891057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814760 ]
00:08:24.982 EAL: No free 2048 kB hugepages reported on node 1
00:08:24.982 [2024-07-21 18:23:43.009837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:24.982 [2024-07-21 18:23:43.114204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:24.982 [2024-07-21 18:23:43.114291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:24.982 [2024-07-21 18:23:43.114336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:24.982 [2024-07-21 18:23:43.114338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.982 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:08:24.982 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:08:24.982 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:24.982 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:24.982 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:08:24.982 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:08:24.982 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:08:24.982 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:24.983 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:24.983 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:08:24.983 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:08:24.983 18:23:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:08:26.431 18:23:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:26.432 18:23:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:26.432 18:23:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:26.432 real 0m1.470s
00:08:26.432 user 0m4.683s
00:08:26.432 sys 0m0.199s
18:23:44
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.432 18:23:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:26.432 ************************************ 00:08:26.432 END TEST accel_decomp_mcore 00:08:26.432 ************************************ 00:08:26.432 18:23:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:26.432 18:23:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:26.432 18:23:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:26.432 18:23:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.432 18:23:44 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.432 ************************************ 00:08:26.432 START TEST accel_decomp_full_mcore 00:08:26.432 ************************************ 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:26.432 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:26.432 [2024-07-21 18:23:44.439302] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
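(The "full" variant adds only -o 0; the '111250 bytes' value read back in the trace below, versus the '4096 bytes' of the previous run, suggests -o 0 selects the whole input file as the per-operation transfer size. A sketch under the same assumptions as above:)

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # four-core decompress again, but with -o 0 so each operation appears to
  # span the full input (logged as '111250 bytes') rather than 4 KiB chunks
  $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress \
      -l $SPDK_DIR/test/accel/bib -y -o 0 -m 0xf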
00:08:26.432 [2024-07-21 18:23:44.439381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814963 ] 00:08:26.432 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.432 [2024-07-21 18:23:44.561486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.691 [2024-07-21 18:23:44.667067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.691 [2024-07-21 18:23:44.667154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.691 [2024-07-21 18:23:44.667265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.691 [2024-07-21 18:23:44.667272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.691 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.692 18:23:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.068 00:08:28.068 real 0m1.498s 00:08:28.068 user 0m4.750s 00:08:28.068 sys 0m0.209s 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.068 18:23:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:28.068 ************************************ 00:08:28.068 END TEST accel_decomp_full_mcore 00:08:28.068 ************************************ 00:08:28.068 18:23:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:28.068 18:23:45 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:28.068 18:23:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:28.068 18:23:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.068 18:23:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.068 ************************************ 00:08:28.068 START TEST accel_decomp_mthread 00:08:28.068 ************************************ 00:08:28.068 18:23:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:28.068 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:28.068 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:28.068 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.068 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:28.069 18:23:45 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:28.069 [2024-07-21 18:23:46.014338] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
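(accel_decomp_mthread swaps the core mask for -T 2: the EAL line below shows -c 0x1 and a single reactor, while the trace reads val=2 back, i.e. two accel_perf worker threads on one core. Same assumptions as the sketches above:)

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # single-core decompress driven by two worker threads per core (-T 2)
  $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress \
      -l $SPDK_DIR/test/accel/bib -y -T 2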
00:08:28.069 [2024-07-21 18:23:46.014420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3815161 ] 00:08:28.069 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.069 [2024-07-21 18:23:46.135589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.069 [2024-07-21 18:23:46.235871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.328 18:23:46 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.328 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.329 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.329 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.329 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.329 18:23:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.266 18:23:47 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:29.266 00:08:29.266 real 0m1.470s 00:08:29.266 user 0m1.290s 00:08:29.266 sys 0m0.192s 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.266 18:23:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:29.266 ************************************ 00:08:29.266 END TEST accel_decomp_mthread 00:08:29.266 ************************************ 00:08:29.525 18:23:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:29.525 18:23:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.525 18:23:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:29.525 18:23:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:08:29.525 18:23:47 accel -- common/autotest_common.sh@10 -- # set +x 00:08:29.525 ************************************ 00:08:29.525 START TEST accel_decomp_full_mthread 00:08:29.525 ************************************ 00:08:29.525 18:23:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.525 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:29.525 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:29.525 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.525 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.525 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.526 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.526 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:29.526 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.526 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.526 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.526 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.526 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.526 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:29.526 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:29.526 [2024-07-21 18:23:47.566731] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
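(The full_mthread case above simply combines the two knobs, whole-file transfers with -o 0 and two threads on one core with -T 2; sketch as before:)

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # whole-file decompress on one core, two threads per core
  $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress \
      -l $SPDK_DIR/test/accel/bib -y -o 0 -T 2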
00:08:29.526 [2024-07-21 18:23:47.566835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3815361 ] 00:08:29.526 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.526 [2024-07-21 18:23:47.690726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.785 [2024-07-21 18:23:47.795669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.785 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.786 18:23:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.786 18:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.166 00:08:31.166 real 0m1.499s 00:08:31.166 user 0m1.319s 00:08:31.166 sys 0m0.195s 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.166 18:23:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:31.166 ************************************ 00:08:31.166 END 
TEST accel_decomp_full_mthread 00:08:31.166 ************************************ 00:08:31.166 18:23:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.166 18:23:49 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:31.166 18:23:49 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.166 18:23:49 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.166 18:23:49 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:31.166 18:23:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.166 18:23:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.166 18:23:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.166 18:23:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.166 18:23:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.166 18:23:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.166 18:23:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.166 18:23:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:31.166 18:23:49 accel -- accel/accel.sh@41 -- # jq -r . 00:08:31.166 ************************************ 00:08:31.166 START TEST accel_dif_functional_tests 00:08:31.166 ************************************ 00:08:31.166 18:23:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.166 [2024-07-21 18:23:49.130902] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:31.166 [2024-07-21 18:23:49.130945] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3815569 ] 00:08:31.166 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.166 [2024-07-21 18:23:49.232748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.166 [2024-07-21 18:23:49.332589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.166 [2024-07-21 18:23:49.332678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.166 [2024-07-21 18:23:49.332673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.433 00:08:31.433 00:08:31.433 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.433 http://cunit.sourceforge.net/ 00:08:31.433 00:08:31.433 00:08:31.433 Suite: accel_dif 00:08:31.433 Test: verify: DIF generated, GUARD check ...passed 00:08:31.433 Test: verify: DIF generated, APPTAG check ...passed 00:08:31.433 Test: verify: DIF generated, REFTAG check ...passed 00:08:31.433 Test: verify: DIF not generated, GUARD check ...[2024-07-21 18:23:49.414186] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:31.433 passed 00:08:31.433 Test: verify: DIF not generated, APPTAG check ...[2024-07-21 18:23:49.414274] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:31.433 passed 00:08:31.433 Test: verify: DIF not generated, REFTAG check ...[2024-07-21 18:23:49.414312] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:31.433 passed 00:08:31.433 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:31.433 Test: verify: APPTAG incorrect, APPTAG check 
...[2024-07-21 18:23:49.414382] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:31.433 passed 00:08:31.433 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:31.433 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:31.433 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:31.433 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-21 18:23:49.414522] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:31.433 passed 00:08:31.433 Test: verify copy: DIF generated, GUARD check ...passed 00:08:31.433 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:31.433 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:31.433 Test: verify copy: DIF not generated, GUARD check ...[2024-07-21 18:23:49.414676] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:31.433 passed 00:08:31.433 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-21 18:23:49.414716] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:31.433 passed 00:08:31.433 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-21 18:23:49.414753] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:31.433 passed 00:08:31.433 Test: generate copy: DIF generated, GUARD check ...passed 00:08:31.433 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:31.433 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:31.433 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:31.433 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:31.433 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:31.433 Test: generate copy: iovecs-len validate ...[2024-07-21 18:23:49.414991] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:31.433 passed 00:08:31.433 Test: generate copy: buffer alignment validate ...passed 00:08:31.433 00:08:31.433 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.433 suites 1 1 n/a 0 0 00:08:31.433 tests 26 26 26 0 0 00:08:31.433 asserts 115 115 115 0 n/a 00:08:31.433 00:08:31.433 Elapsed time = 0.003 seconds 00:08:31.433 00:08:31.433 real 0m0.498s 00:08:31.433 user 0m0.721s 00:08:31.433 sys 0m0.198s 00:08:31.433 18:23:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.433 18:23:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:31.433 ************************************ 00:08:31.433 END TEST accel_dif_functional_tests 00:08:31.433 ************************************ 00:08:31.692 18:23:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.692 00:08:31.692 real 0m34.198s 00:08:31.692 user 0m36.404s 00:08:31.692 sys 0m6.254s 00:08:31.692 18:23:49 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.692 18:23:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.692 ************************************ 00:08:31.692 END TEST accel 00:08:31.692 ************************************ 00:08:31.692 18:23:49 -- common/autotest_common.sh@1142 -- # return 0 00:08:31.692 18:23:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:31.692 18:23:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.692 18:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.692 18:23:49 -- common/autotest_common.sh@10 -- # set +x 00:08:31.692 ************************************ 00:08:31.692 START TEST accel_rpc 00:08:31.692 ************************************ 00:08:31.692 18:23:49 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:31.692 * Looking for test storage... 00:08:31.692 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:08:31.692 18:23:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:31.692 18:23:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3815788 00:08:31.692 18:23:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3815788 00:08:31.692 18:23:49 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3815788 ']' 00:08:31.692 18:23:49 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.692 18:23:49 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.692 18:23:49 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.692 18:23:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:31.692 18:23:49 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.692 18:23:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.692 [2024-07-21 18:23:49.884287] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
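(accel_rpc exercises a live spdk_tgt over JSON-RPC. The target is started with --wait-for-rpc, above, so that the copy-opcode assignment can land before the accel framework initializes. Replayed by hand with the tree's rpc.py against the default /var/tmp/spdk.sock socket, the sequence traced below reduces to:)

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  $SPDK_DIR/build/bin/spdk_tgt --wait-for-rpc &
  # (wait until the RPC socket answers, e.g. poll rpc.py rpc_get_methods)
  $SPDK_DIR/scripts/rpc.py accel_assign_opc -o copy -m software
  $SPDK_DIR/scripts/rpc.py framework_start_init
  # verify the assignment stuck; expected output: software
  $SPDK_DIR/scripts/rpc.py accel_get_opc_assignments | jq -r .copy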
00:08:31.693 [2024-07-21 18:23:49.884377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3815788 ] 00:08:31.951 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.951 [2024-07-21 18:23:50.003201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.951 [2024-07-21 18:23:50.113307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.886 18:23:50 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.886 18:23:50 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:32.886 18:23:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:32.886 18:23:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:32.886 18:23:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:32.886 18:23:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:32.886 18:23:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:32.886 18:23:50 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.886 18:23:50 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.886 18:23:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.886 ************************************ 00:08:32.886 START TEST accel_assign_opcode 00:08:32.886 ************************************ 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:32.886 [2024-07-21 18:23:50.803523] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:32.886 [2024-07-21 18:23:50.811528] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.886 18:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:32.886 18:23:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.886 18:23:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:32.886 18:23:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:32.886 18:23:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:32.886 18:23:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:32.886 18:23:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:32.886 18:23:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.886 software 00:08:32.886 00:08:32.886 real 0m0.269s 00:08:32.886 user 0m0.043s 00:08:32.886 sys 0m0.011s 00:08:32.886 18:23:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.886 18:23:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:32.886 ************************************ 00:08:32.886 END TEST accel_assign_opcode 00:08:32.886 ************************************ 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:33.146 18:23:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3815788 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3815788 ']' 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3815788 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3815788 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3815788' 00:08:33.146 killing process with pid 3815788 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@967 -- # kill 3815788 00:08:33.146 18:23:51 accel_rpc -- common/autotest_common.sh@972 -- # wait 3815788 00:08:33.405 00:08:33.405 real 0m1.786s 00:08:33.405 user 0m1.807s 00:08:33.405 sys 0m0.558s 00:08:33.405 18:23:51 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.405 18:23:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.405 ************************************ 00:08:33.405 END TEST accel_rpc 00:08:33.405 ************************************ 00:08:33.405 18:23:51 -- common/autotest_common.sh@1142 -- # return 0 00:08:33.406 18:23:51 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:33.406 18:23:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:33.406 18:23:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.406 18:23:51 -- common/autotest_common.sh@10 -- # set +x 00:08:33.406 ************************************ 00:08:33.406 START TEST app_cmdline 00:08:33.406 ************************************ 00:08:33.406 18:23:51 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:33.665 * Looking for test storage... 
00:08:33.665 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:33.665 18:23:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:33.665 18:23:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3816047 00:08:33.665 18:23:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3816047 00:08:33.665 18:23:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:33.665 18:23:51 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3816047 ']' 00:08:33.665 18:23:51 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.665 18:23:51 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.665 18:23:51 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.665 18:23:51 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.665 18:23:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:33.665 [2024-07-21 18:23:51.747701] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:33.665 [2024-07-21 18:23:51.747770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3816047 ] 00:08:33.665 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.665 [2024-07-21 18:23:51.861106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.925 [2024-07-21 18:23:51.960875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.863 18:23:52 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.863 18:23:52 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:34.863 18:23:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:34.863 { 00:08:34.863 "version": "SPDK v24.09-pre git sha1 89fd17309", 00:08:34.863 "fields": { 00:08:34.863 "major": 24, 00:08:34.863 "minor": 9, 00:08:34.863 "patch": 0, 00:08:34.863 "suffix": "-pre", 00:08:34.863 "commit": "89fd17309" 00:08:34.863 } 00:08:34.863 } 00:08:34.863 18:23:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:34.863 18:23:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:34.863 18:23:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:34.863 18:23:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:34.863 18:23:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:34.863 18:23:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:34.863 18:23:52 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.863 18:23:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:34.863 18:23:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:34.863 18:23:52 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.863 18:23:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:34.863 18:23:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:34.863 18:23:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:08:34.863 18:23:53 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.125 request: 00:08:35.125 { 00:08:35.125 "method": "env_dpdk_get_mem_stats", 00:08:35.125 "req_id": 1 00:08:35.125 } 00:08:35.125 Got JSON-RPC error response 00:08:35.125 response: 00:08:35.125 { 00:08:35.125 "code": -32601, 00:08:35.125 "message": "Method not found" 00:08:35.125 } 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:35.125 18:23:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3816047 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3816047 ']' 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3816047 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3816047 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3816047' 00:08:35.125 killing process with pid 3816047 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@967 -- # kill 3816047 00:08:35.125 18:23:53 app_cmdline -- common/autotest_common.sh@972 -- # wait 3816047 00:08:35.692 00:08:35.692 real 0m2.069s 00:08:35.692 user 0m2.529s 00:08:35.692 sys 0m0.592s 00:08:35.692 18:23:53 app_cmdline -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:08:35.692 18:23:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:35.692 ************************************ 00:08:35.692 END TEST app_cmdline 00:08:35.692 ************************************ 00:08:35.692 18:23:53 -- common/autotest_common.sh@1142 -- # return 0 00:08:35.692 18:23:53 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:35.692 18:23:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:35.692 18:23:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.692 18:23:53 -- common/autotest_common.sh@10 -- # set +x 00:08:35.692 ************************************ 00:08:35.692 START TEST version 00:08:35.692 ************************************ 00:08:35.692 18:23:53 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:35.692 * Looking for test storage... 00:08:35.692 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:35.692 18:23:53 version -- app/version.sh@17 -- # get_header_version major 00:08:35.692 18:23:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:35.692 18:23:53 version -- app/version.sh@14 -- # cut -f2 00:08:35.692 18:23:53 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.692 18:23:53 version -- app/version.sh@17 -- # major=24 00:08:35.693 18:23:53 version -- app/version.sh@18 -- # get_header_version minor 00:08:35.693 18:23:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:35.693 18:23:53 version -- app/version.sh@14 -- # cut -f2 00:08:35.693 18:23:53 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.693 18:23:53 version -- app/version.sh@18 -- # minor=9 00:08:35.693 18:23:53 version -- app/version.sh@19 -- # get_header_version patch 00:08:35.693 18:23:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:35.693 18:23:53 version -- app/version.sh@14 -- # cut -f2 00:08:35.693 18:23:53 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.951 18:23:53 version -- app/version.sh@19 -- # patch=0 00:08:35.951 18:23:53 version -- app/version.sh@20 -- # get_header_version suffix 00:08:35.951 18:23:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:35.951 18:23:53 version -- app/version.sh@14 -- # cut -f2 00:08:35.951 18:23:53 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.951 18:23:53 version -- app/version.sh@20 -- # suffix=-pre 00:08:35.951 18:23:53 version -- app/version.sh@22 -- # version=24.9 00:08:35.951 18:23:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:35.951 18:23:53 version -- app/version.sh@28 -- # version=24.9rc0 00:08:35.951 18:23:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:35.951 18:23:53 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:08:35.951 18:23:53 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:35.951 18:23:53 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:35.951 00:08:35.951 real 0m0.190s 00:08:35.951 user 0m0.091s 00:08:35.951 sys 0m0.148s 00:08:35.951 18:23:53 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.951 18:23:53 version -- common/autotest_common.sh@10 -- # set +x 00:08:35.952 ************************************ 00:08:35.952 END TEST version 00:08:35.952 ************************************ 00:08:35.952 18:23:54 -- common/autotest_common.sh@1142 -- # return 0 00:08:35.952 18:23:54 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@198 -- # uname -s 00:08:35.952 18:23:54 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:35.952 18:23:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:35.952 18:23:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:35.952 18:23:54 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:35.952 18:23:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.952 18:23:54 -- common/autotest_common.sh@10 -- # set +x 00:08:35.952 18:23:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:08:35.952 18:23:54 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:08:35.952 18:23:54 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:08:35.952 18:23:54 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:08:35.952 18:23:54 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:35.952 18:23:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:35.952 18:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.952 18:23:54 -- common/autotest_common.sh@10 -- # set +x 00:08:35.952 ************************************ 00:08:35.952 START TEST llvm_fuzz 00:08:35.952 ************************************ 00:08:35.952 18:23:54 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:36.211 * Looking for test storage... 
00:08:36.211 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:08:36.211 18:23:54 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:08:36.211 18:23:54 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:08:36.211 18:23:54 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:08:36.211 18:23:54 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:08:36.211 18:23:54 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:08:36.211 18:23:54 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:08:36.211 18:23:54 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:08:36.211 18:23:54 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:08:36.211 18:23:54 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:36.211 18:23:54 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:08:36.211 18:23:54 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:08:36.211 18:23:54 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:36.211 18:23:54 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:36.212 18:23:54 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:36.212 18:23:54 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:36.212 18:23:54 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:36.212 18:23:54 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:36.212 18:23:54 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:36.212 18:23:54 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:36.212 18:23:54 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.212 18:23:54 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:36.212 ************************************ 00:08:36.212 START TEST nvmf_llvm_fuzz 00:08:36.212 ************************************ 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:36.212 * Looking for test storage... 
00:08:36.212 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
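CONFIG_FUZZER=y and CONFIG_FUZZER_LIB above point the build at clang's libclang_rt.fuzzer_no_main, the libFuzzer runtime without its own main(). A hypothetical consumer of these flags (not something the traced script does verbatim, and with $rootdir assumed to be the spdk checkout) might read:

    source "$rootdir/test/common/build_config.sh"
    if [[ $CONFIG_FUZZER == y ]]; then
        # the no_main variant lets the target keep its own entry point
        extra_ldflags=$CONFIG_FUZZER_LIB
    fi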
00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:36.212 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:36.212 #define SPDK_CONFIG_H 00:08:36.212 #define SPDK_CONFIG_APPS 1 00:08:36.212 #define SPDK_CONFIG_ARCH native 00:08:36.213 #undef SPDK_CONFIG_ASAN 00:08:36.213 #undef SPDK_CONFIG_AVAHI 00:08:36.213 #undef SPDK_CONFIG_CET 00:08:36.213 #define SPDK_CONFIG_COVERAGE 1 00:08:36.213 #define SPDK_CONFIG_CROSS_PREFIX 00:08:36.213 #undef SPDK_CONFIG_CRYPTO 00:08:36.213 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:36.213 #undef SPDK_CONFIG_CUSTOMOCF 00:08:36.213 #undef SPDK_CONFIG_DAOS 00:08:36.213 #define SPDK_CONFIG_DAOS_DIR 00:08:36.213 #define SPDK_CONFIG_DEBUG 1 00:08:36.213 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:36.213 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:36.213 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:36.213 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:36.213 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:36.213 #undef SPDK_CONFIG_DPDK_UADK 00:08:36.213 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:36.213 #define SPDK_CONFIG_EXAMPLES 1 00:08:36.213 #undef SPDK_CONFIG_FC 00:08:36.213 #define SPDK_CONFIG_FC_PATH 00:08:36.213 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:36.213 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:36.213 #undef SPDK_CONFIG_FUSE 00:08:36.213 #define SPDK_CONFIG_FUZZER 1 00:08:36.213 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:36.213 #undef SPDK_CONFIG_GOLANG 00:08:36.213 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:36.213 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:36.213 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:36.213 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:36.213 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:36.213 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:36.213 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:36.213 #define SPDK_CONFIG_IDXD 1 00:08:36.213 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:36.213 #undef SPDK_CONFIG_IPSEC_MB 00:08:36.213 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:36.213 #define SPDK_CONFIG_ISAL 1 00:08:36.213 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:36.213 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:36.213 #define SPDK_CONFIG_LIBDIR 00:08:36.213 #undef SPDK_CONFIG_LTO 00:08:36.213 #define SPDK_CONFIG_MAX_LCORES 128 00:08:36.213 #define SPDK_CONFIG_NVME_CUSE 1 00:08:36.213 #undef SPDK_CONFIG_OCF 00:08:36.213 #define SPDK_CONFIG_OCF_PATH 00:08:36.213 #define SPDK_CONFIG_OPENSSL_PATH 00:08:36.213 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:36.213 #define SPDK_CONFIG_PGO_DIR 00:08:36.213 #undef SPDK_CONFIG_PGO_USE 00:08:36.213 #define SPDK_CONFIG_PREFIX /usr/local 00:08:36.213 #undef SPDK_CONFIG_RAID5F 00:08:36.213 #undef SPDK_CONFIG_RBD 00:08:36.213 #define SPDK_CONFIG_RDMA 1 00:08:36.213 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:36.213 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:36.213 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:36.213 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:36.213 #undef SPDK_CONFIG_SHARED 00:08:36.213 #undef SPDK_CONFIG_SMA 00:08:36.213 #define SPDK_CONFIG_TESTS 1 00:08:36.213 #undef SPDK_CONFIG_TSAN 00:08:36.213 #define SPDK_CONFIG_UBLK 1 00:08:36.213 #define SPDK_CONFIG_UBSAN 1 00:08:36.213 #undef SPDK_CONFIG_UNIT_TESTS 00:08:36.213 #undef SPDK_CONFIG_URING 00:08:36.213 #define SPDK_CONFIG_URING_PATH 00:08:36.213 #undef SPDK_CONFIG_URING_ZNS 00:08:36.213 #undef SPDK_CONFIG_USDT 00:08:36.213 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:36.213 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:36.213 #define SPDK_CONFIG_VFIO_USER 1 00:08:36.213 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:36.213 #define SPDK_CONFIG_VHOST 1 00:08:36.213 #define SPDK_CONFIG_VIRTIO 1 00:08:36.213 #undef SPDK_CONFIG_VTUNE 00:08:36.213 #define SPDK_CONFIG_VTUNE_DIR 00:08:36.213 #define SPDK_CONFIG_WERROR 1 00:08:36.213 #define SPDK_CONFIG_WPDK_DIR 00:08:36.213 #undef SPDK_CONFIG_XNVME 00:08:36.213 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
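The escaped glob in applications.sh@23 above is just a substring test of the generated config header; unescaped, it is roughly the following, with the header path restated from the trace:

    config_h=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h
    # true for this build, since the dumped config.h defines SPDK_CONFIG_DEBUG
    [[ $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]] && debug_build=yes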
00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:36.213 
18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:36.213 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:36.473 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:36.474 18:23:54 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
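The long run of ': N' followed by 'export SPDK_TEST_*' pairs above is consistent with the usual default-then-export idiom (an inference from the xtrace, not a quote of the source):

    # assign a default only when the variable is unset, then export it;
    # under xtrace this prints as ': 0' followed by 'export SPDK_TEST_FTL'
    : "${SPDK_TEST_FTL:=0}"
    export SPDK_TEST_FTL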
00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:08:36.474 18:23:54 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:36.474 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:36.475 18:23:54 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 3816556 ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 3816556 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Hm2ZLX 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.Hm2ZLX/tests/nvmf /tmp/spdk.Hm2ZLX 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=86161055744 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508572672 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=8347516928 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895630336 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=6086656 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253045248 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1241088 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:36.475 18:23:54 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:36.475 * Looking for test storage... 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=86161055744 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=10562109440 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:36.475 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:36.475 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:36.476 18:23:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:08:36.476 [2024-07-21 18:23:54.567517] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:36.476 [2024-07-21 18:23:54.567597] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3816616 ] 00:08:36.476 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.735 [2024-07-21 18:23:54.819422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.735 [2024-07-21 18:23:54.907599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.994 [2024-07-21 18:23:54.971804] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.994 [2024-07-21 18:23:54.988046] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:08:36.994 INFO: Running with entropic power schedule (0xFF, 100). 00:08:36.994 INFO: Seed: 3529416056 00:08:36.994 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:08:36.994 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:08:36.994 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:36.994 INFO: A corpus is not provided, starting from an empty corpus 00:08:36.994 #2 INITED exec/s: 0 rss: 65Mb 00:08:36.994 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:36.994 This may also happen if the target rejected all inputs we tried so far 00:08:36.994 [2024-07-21 18:23:55.043553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:36.994 [2024-07-21 18:23:55.043589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.561 NEW_FUNC[1/696]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:08:37.561 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:37.561 #3 NEW cov: 11868 ft: 11867 corp: 2/103b lim: 320 exec/s: 0 rss: 73Mb L: 102/102 MS: 1 InsertRepeatedBytes- 00:08:37.561 [2024-07-21 18:23:55.534803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (40) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:37.561 [2024-07-21 18:23:55.534852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.561 #8 NEW cov: 12001 ft: 12363 corp: 3/186b lim: 320 exec/s: 0 rss: 73Mb L: 83/102 MS: 5 ChangeByte-CopyPart-EraseBytes-InsertByte-CrossOver- 00:08:37.561 [2024-07-21 18:23:55.584876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (76) qid:0 cid:4 nsid:76767676 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7676767676767676 00:08:37.561 [2024-07-21 18:23:55.584913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.561 NEW_FUNC[1/1]: 0x17c6c40 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:08:37.561 #15 NEW cov: 12028 ft: 12505 corp: 4/279b lim: 320 exec/s: 0 rss: 73Mb L: 93/102 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:37.561 [2024-07-21 18:23:55.635259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:37.561 [2024-07-21 18:23:55.635293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.561 [2024-07-21 18:23:55.635356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:37.561 [2024-07-21 18:23:55.635376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.561 [2024-07-21 18:23:55.635436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:37.561 [2024-07-21 18:23:55.635454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.561 #26 NEW cov: 12113 ft: 13046 corp: 5/483b lim: 320 exec/s: 0 rss: 73Mb L: 204/204 MS: 1 CrossOver- 00:08:37.561 [2024-07-21 18:23:55.705116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:37.561 [2024-07-21 18:23:55.705149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.561 #27 NEW cov: 12113 ft: 13097 corp: 6/585b lim: 320 exec/s: 0 rss: 73Mb L: 102/204 MS: 1 
CrossOver- 00:08:37.561 [2024-07-21 18:23:55.755348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:fff60000 cdw10:00000000 cdw11:00000000 00:08:37.561 [2024-07-21 18:23:55.755382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.820 #29 NEW cov: 12113 ft: 13281 corp: 7/651b lim: 320 exec/s: 0 rss: 73Mb L: 66/204 MS: 2 EraseBytes-CMP- DE: "\366\377\377\377"- 00:08:37.820 [2024-07-21 18:23:55.825495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:37.820 [2024-07-21 18:23:55.825529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.820 #30 NEW cov: 12113 ft: 13327 corp: 8/753b lim: 320 exec/s: 0 rss: 73Mb L: 102/204 MS: 1 ShuffleBytes- 00:08:37.820 [2024-07-21 18:23:55.875856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:c8c8c8c8 cdw10:c8c8c8c8 cdw11:c8c8c8c8 00:08:37.820 [2024-07-21 18:23:55.875890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.820 [2024-07-21 18:23:55.875959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c8) qid:0 cid:5 nsid:c8c8c8c8 cdw10:c8c8c8c8 cdw11:c8c8c8c8 00:08:37.820 [2024-07-21 18:23:55.875978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.820 [2024-07-21 18:23:55.876039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:37.820 [2024-07-21 18:23:55.876058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.820 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:37.820 #31 NEW cov: 12136 ft: 13342 corp: 9/969b lim: 320 exec/s: 0 rss: 73Mb L: 216/216 MS: 1 InsertRepeatedBytes- 00:08:37.820 [2024-07-21 18:23:55.945856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (76) qid:0 cid:4 nsid:76767676 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7676767676767676 00:08:37.820 [2024-07-21 18:23:55.945891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.820 #37 NEW cov: 12136 ft: 13378 corp: 10/1062b lim: 320 exec/s: 0 rss: 74Mb L: 93/216 MS: 1 ChangeBit- 00:08:37.820 [2024-07-21 18:23:56.016052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:fff60000 cdw10:00000000 cdw11:00000000 00:08:37.820 [2024-07-21 18:23:56.016086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.079 #38 NEW cov: 12136 ft: 13433 corp: 11/1128b lim: 320 exec/s: 38 rss: 74Mb L: 66/216 MS: 1 ChangeBit- 00:08:38.079 [2024-07-21 18:23:56.086370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.079 [2024-07-21 18:23:56.086404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.079 [2024-07-21 18:23:56.086467] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.079 [2024-07-21 18:23:56.086487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.079 #39 NEW cov: 12136 ft: 13642 corp: 12/1257b lim: 320 exec/s: 39 rss: 74Mb L: 129/216 MS: 1 EraseBytes- 00:08:38.079 [2024-07-21 18:23:56.156577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (76) qid:0 cid:4 nsid:76767676 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x76 00:08:38.079 [2024-07-21 18:23:56.156610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.079 [2024-07-21 18:23:56.156674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 00:08:38.079 [2024-07-21 18:23:56.156693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.079 #40 NEW cov: 12136 ft: 13749 corp: 13/1427b lim: 320 exec/s: 40 rss: 74Mb L: 170/216 MS: 1 CrossOver- 00:08:38.079 [2024-07-21 18:23:56.206657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ad) qid:0 cid:4 nsid:adadadad cdw10:adadadad cdw11:adadadad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.079 [2024-07-21 18:23:56.206691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.079 NEW_FUNC[1/1]: 0x17c77a0 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:08:38.079 #42 NEW cov: 12149 ft: 14058 corp: 14/1535b lim: 320 exec/s: 42 rss: 74Mb L: 108/216 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:38.079 [2024-07-21 18:23:56.257002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (76) qid:0 cid:4 nsid:76767676 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x76 00:08:38.079 [2024-07-21 18:23:56.257035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.079 [2024-07-21 18:23:56.257097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 00:08:38.079 [2024-07-21 18:23:56.257116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.079 [2024-07-21 18:23:56.257185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (76) qid:0 cid:6 nsid:76767676 cdw10:ffffffff cdw11:7676ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:38.079 [2024-07-21 18:23:56.257204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.338 #43 NEW cov: 12149 ft: 14192 corp: 15/1739b lim: 320 exec/s: 43 rss: 74Mb L: 204/216 MS: 1 InsertRepeatedBytes- 00:08:38.338 [2024-07-21 18:23:56.326951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (40) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:002f0000 00:08:38.338 [2024-07-21 18:23:56.326984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.338 #44 NEW cov: 12149 ft: 14254 corp: 16/1823b lim: 320 exec/s: 44 rss: 74Mb L: 84/216 MS: 1 
InsertByte- 00:08:38.338 [2024-07-21 18:23:56.397153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:76767676 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7676767676767676 00:08:38.338 [2024-07-21 18:23:56.397187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.338 NEW_FUNC[1/1]: 0x1398480 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2093 00:08:38.338 #48 NEW cov: 12180 ft: 14392 corp: 17/1908b lim: 320 exec/s: 48 rss: 74Mb L: 85/216 MS: 4 ShuffleBytes-ChangeByte-InsertByte-CrossOver- 00:08:38.338 [2024-07-21 18:23:56.447375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:f600005b cdw10:ffffffff cdw11:ffffffff 00:08:38.338 [2024-07-21 18:23:56.447409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.338 [2024-07-21 18:23:56.447483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:38.338 [2024-07-21 18:23:56.447507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.338 #52 NEW cov: 12180 ft: 14423 corp: 18/2062b lim: 320 exec/s: 52 rss: 74Mb L: 154/216 MS: 4 EraseBytes-EraseBytes-InsertByte-InsertRepeatedBytes- 00:08:38.338 [2024-07-21 18:23:56.517754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (76) qid:0 cid:4 nsid:fff67676 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7676767676 00:08:38.338 [2024-07-21 18:23:56.517787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.338 [2024-07-21 18:23:56.517849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:76760000 cdw11:76767676 00:08:38.338 [2024-07-21 18:23:56.517868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.338 [2024-07-21 18:23:56.517938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (76) qid:0 cid:6 nsid:76767676 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:38.338 [2024-07-21 18:23:56.517957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.597 #53 NEW cov: 12180 ft: 14436 corp: 19/2270b lim: 320 exec/s: 53 rss: 74Mb L: 208/216 MS: 1 PersAutoDict- DE: "\366\377\377\377"- 00:08:38.597 [2024-07-21 18:23:56.587685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (40) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:002f0000 00:08:38.597 [2024-07-21 18:23:56.587718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.597 #54 NEW cov: 12180 ft: 14454 corp: 20/2354b lim: 320 exec/s: 54 rss: 74Mb L: 84/216 MS: 1 ChangeByte- 00:08:38.597 [2024-07-21 18:23:56.657873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.597 [2024-07-21 18:23:56.657906] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.597 #55 NEW cov: 12180 ft: 14462 corp: 21/2460b lim: 320 exec/s: 55 rss: 74Mb L: 106/216 MS: 1 CMP- DE: "\364\377\377\377"- 00:08:38.597 [2024-07-21 18:23:56.708243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (40) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:002f0000 00:08:38.597 [2024-07-21 18:23:56.708276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.597 [2024-07-21 18:23:56.708338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.597 [2024-07-21 18:23:56.708357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.597 [2024-07-21 18:23:56.708417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000a0000 cdw11:00000000 00:08:38.597 [2024-07-21 18:23:56.708436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.597 #56 NEW cov: 12180 ft: 14486 corp: 22/2660b lim: 320 exec/s: 56 rss: 74Mb L: 200/216 MS: 1 CrossOver- 00:08:38.597 [2024-07-21 18:23:56.778207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:ffff cdw10:00000000 cdw11:00000000 00:08:38.597 [2024-07-21 18:23:56.778246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.856 #57 NEW cov: 12180 ft: 14499 corp: 23/2770b lim: 320 exec/s: 57 rss: 74Mb L: 110/216 MS: 1 PersAutoDict- DE: "\364\377\377\377"- 00:08:38.856 [2024-07-21 18:23:56.848394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.856 [2024-07-21 18:23:56.848427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.856 #58 NEW cov: 12180 ft: 14504 corp: 24/2872b lim: 320 exec/s: 58 rss: 74Mb L: 102/216 MS: 1 ChangeBinInt- 00:08:38.856 [2024-07-21 18:23:56.898823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (76) qid:0 cid:4 nsid:fff67676 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7676767676 00:08:38.856 [2024-07-21 18:23:56.898856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.856 [2024-07-21 18:23:56.898920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 00:08:38.856 [2024-07-21 18:23:56.898939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.856 [2024-07-21 18:23:56.899000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.856 [2024-07-21 18:23:56.899018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.856 #59 NEW cov: 12180 ft: 14569 corp: 25/3080b lim: 320 exec/s: 59 rss: 75Mb L: 208/216 MS: 1 CopyPart- 00:08:38.856 [2024-07-21 18:23:56.968740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 
nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.856 [2024-07-21 18:23:56.968774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.856 #60 NEW cov: 12180 ft: 14625 corp: 26/3182b lim: 320 exec/s: 60 rss: 75Mb L: 102/216 MS: 1 CMP- DE: "\006\000\000\000"- 00:08:38.856 [2024-07-21 18:23:57.019076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.856 [2024-07-21 18:23:57.019110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.856 [2024-07-21 18:23:57.019172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.856 [2024-07-21 18:23:57.019191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.856 [2024-07-21 18:23:57.019260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.856 [2024-07-21 18:23:57.019279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.856 #61 NEW cov: 12180 ft: 14627 corp: 27/3386b lim: 320 exec/s: 30 rss: 75Mb L: 204/216 MS: 1 ChangeBit- 00:08:38.856 #61 DONE cov: 12180 ft: 14627 corp: 27/3386b lim: 320 exec/s: 30 rss: 75Mb 00:08:38.856 ###### Recommended dictionary. ###### 00:08:38.856 "\366\377\377\377" # Uses: 1 00:08:38.856 "\364\377\377\377" # Uses: 1 00:08:38.856 "\006\000\000\000" # Uses: 0 00:08:38.856 ###### End of recommended dictionary. ###### 00:08:38.856 Done 61 runs in 2 second(s) 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:39.115 18:23:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:08:39.115 [2024-07-21 18:23:57.242551] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:39.116 [2024-07-21 18:23:57.242625] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3816957 ] 00:08:39.116 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.374 [2024-07-21 18:23:57.498547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.374 [2024-07-21 18:23:57.587815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.632 [2024-07-21 18:23:57.651928] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.632 [2024-07-21 18:23:57.668171] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:08:39.632 INFO: Running with entropic power schedule (0xFF, 100). 00:08:39.632 INFO: Seed: 1914455632 00:08:39.632 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:08:39.632 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:08:39.633 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:39.633 INFO: A corpus is not provided, starting from an empty corpus 00:08:39.633 #2 INITED exec/s: 0 rss: 65Mb 00:08:39.633 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:39.633 This may also happen if the target rejected all inputs we tried so far 00:08:39.633 [2024-07-21 18:23:57.723579] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:39.633 [2024-07-21 18:23:57.723724] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:39.633 [2024-07-21 18:23:57.723856] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:39.633 [2024-07-21 18:23:57.724114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.633 [2024-07-21 18:23:57.724155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.633 [2024-07-21 18:23:57.724230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.633 [2024-07-21 18:23:57.724250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.633 [2024-07-21 18:23:57.724319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.633 [2024-07-21 18:23:57.724338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.199 NEW_FUNC[1/698]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:08:40.199 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:40.199 #5 NEW cov: 11968 ft: 11969 corp: 2/24b lim: 30 exec/s: 0 rss: 72Mb L: 23/23 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:08:40.199 [2024-07-21 18:23:58.214865] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.199 [2024-07-21 18:23:58.215031] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.199 [2024-07-21 18:23:58.215166] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.199 [2024-07-21 18:23:58.215425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.215469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.199 [2024-07-21 18:23:58.215539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.215559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.199 [2024-07-21 18:23:58.215627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.215646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.199 #6 NEW 
cov: 12098 ft: 12628 corp: 3/42b lim: 30 exec/s: 0 rss: 72Mb L: 18/23 MS: 1 EraseBytes- 00:08:40.199 [2024-07-21 18:23:58.285059] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.199 [2024-07-21 18:23:58.285204] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.199 [2024-07-21 18:23:58.285341] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.199 [2024-07-21 18:23:58.285475] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (222052) > buf size (4096) 00:08:40.199 [2024-07-21 18:23:58.285724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.285759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.199 [2024-07-21 18:23:58.285830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.285850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.199 [2024-07-21 18:23:58.285917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.285937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.199 [2024-07-21 18:23:58.286003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d8d800d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.286023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.199 #7 NEW cov: 12127 ft: 13261 corp: 4/67b lim: 30 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:08:40.199 [2024-07-21 18:23:58.355228] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.199 [2024-07-21 18:23:58.355370] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.199 [2024-07-21 18:23:58.355502] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.199 [2024-07-21 18:23:58.355640] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (222052) > buf size (4096) 00:08:40.199 [2024-07-21 18:23:58.355915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.355952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.199 [2024-07-21 18:23:58.356022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.356042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.199 [2024-07-21 18:23:58.356111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.356131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.199 [2024-07-21 18:23:58.356200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d8d800d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.199 [2024-07-21 18:23:58.356227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.199 #8 NEW cov: 12212 ft: 13522 corp: 5/92b lim: 30 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeByte- 00:08:40.457 [2024-07-21 18:23:58.425446] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.425591] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.425724] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.425856] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.426125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.426160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.457 [2024-07-21 18:23:58.426233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.426254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.457 [2024-07-21 18:23:58.426323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.426343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.457 [2024-07-21 18:23:58.426411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.426430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.457 #14 NEW cov: 12212 ft: 13637 corp: 6/119b lim: 30 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 CopyPart- 00:08:40.457 [2024-07-21 18:23:58.475640] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.475782] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.475918] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.476048] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (222052) > buf size (4096) 00:08:40.457 [2024-07-21 18:23:58.476305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:40.457 [2024-07-21 18:23:58.476343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.457 [2024-07-21 18:23:58.476414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.476433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.457 [2024-07-21 18:23:58.476501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.476520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.457 [2024-07-21 18:23:58.476587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d8d800d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.476606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.457 #15 NEW cov: 12212 ft: 13757 corp: 7/144b lim: 30 exec/s: 0 rss: 73Mb L: 25/27 MS: 1 ShuffleBytes- 00:08:40.457 [2024-07-21 18:23:58.545753] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.545903] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.546035] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.457 [2024-07-21 18:23:58.546168] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:08:40.457 [2024-07-21 18:23:58.546440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.546475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.457 [2024-07-21 18:23:58.546543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.546564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.457 [2024-07-21 18:23:58.546631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.546650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.457 [2024-07-21 18:23:58.546718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.457 [2024-07-21 18:23:58.546738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.458 #16 NEW cov: 12212 ft: 13806 corp: 8/169b lim: 30 exec/s: 0 rss: 73Mb L: 25/27 MS: 1 CopyPart- 00:08:40.458 [2024-07-21 18:23:58.595877] 
ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.458 [2024-07-21 18:23:58.596027] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.458 [2024-07-21 18:23:58.596158] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.458 [2024-07-21 18:23:58.596420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.458 [2024-07-21 18:23:58.596454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.458 [2024-07-21 18:23:58.596525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff3b83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.458 [2024-07-21 18:23:58.596548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.458 [2024-07-21 18:23:58.596615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.458 [2024-07-21 18:23:58.596635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.458 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:40.458 #17 NEW cov: 12235 ft: 13871 corp: 9/192b lim: 30 exec/s: 0 rss: 73Mb L: 23/27 MS: 1 ChangeByte- 00:08:40.458 [2024-07-21 18:23:58.646015] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:08:40.458 [2024-07-21 18:23:58.646157] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:08:40.458 [2024-07-21 18:23:58.646295] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.458 [2024-07-21 18:23:58.646566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.458 [2024-07-21 18:23:58.646600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.458 [2024-07-21 18:23:58.646670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.458 [2024-07-21 18:23:58.646690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.458 [2024-07-21 18:23:58.646757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.458 [2024-07-21 18:23:58.646777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.716 #18 NEW cov: 12235 ft: 13960 corp: 10/215b lim: 30 exec/s: 0 rss: 73Mb L: 23/27 MS: 1 CMP- DE: "\000\004\000\000\000\000\000\000"- 00:08:40.716 [2024-07-21 18:23:58.696163] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:08:40.716 [2024-07-21 18:23:58.696313] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log 
page offset 0xff 00:08:40.716 [2024-07-21 18:23:58.696441] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000bfff 00:08:40.716 [2024-07-21 18:23:58.696691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.696725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.716 [2024-07-21 18:23:58.696795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.696816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.716 [2024-07-21 18:23:58.696884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.696903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.716 #19 NEW cov: 12235 ft: 13998 corp: 11/238b lim: 30 exec/s: 19 rss: 73Mb L: 23/27 MS: 1 ChangeBit- 00:08:40.716 [2024-07-21 18:23:58.766319] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.716 [2024-07-21 18:23:58.766465] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:08:40.716 [2024-07-21 18:23:58.766717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.766756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.716 [2024-07-21 18:23:58.766826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.766847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.716 #20 NEW cov: 12235 ft: 14296 corp: 12/251b lim: 30 exec/s: 20 rss: 73Mb L: 13/27 MS: 1 EraseBytes- 00:08:40.716 [2024-07-21 18:23:58.836610] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.716 [2024-07-21 18:23:58.836752] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.716 [2024-07-21 18:23:58.836887] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003636 00:08:40.716 [2024-07-21 18:23:58.837019] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.716 [2024-07-21 18:23:58.837149] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.716 [2024-07-21 18:23:58.837419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.837454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.716 [2024-07-21 18:23:58.837523] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.837544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.716 [2024-07-21 18:23:58.837610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:36360236 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.837629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.716 [2024-07-21 18:23:58.837694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:36ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.837714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.716 [2024-07-21 18:23:58.837779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.837798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:40.716 #21 NEW cov: 12235 ft: 14366 corp: 13/281b lim: 30 exec/s: 21 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:08:40.716 [2024-07-21 18:23:58.886699] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.716 [2024-07-21 18:23:58.886842] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.716 [2024-07-21 18:23:58.886977] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261988) > buf size (4096) 00:08:40.716 [2024-07-21 18:23:58.887228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.887262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.716 [2024-07-21 18:23:58.887332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.887353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.716 [2024-07-21 18:23:58.887425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffd800d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.716 [2024-07-21 18:23:58.887445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.716 #22 NEW cov: 12235 ft: 14380 corp: 14/299b lim: 30 exec/s: 22 rss: 73Mb L: 18/30 MS: 1 EraseBytes- 00:08:40.974 [2024-07-21 18:23:58.936817] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (535552) > buf size (4096) 00:08:40.974 [2024-07-21 18:23:58.936958] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.974 [2024-07-21 18:23:58.937090] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.974 
[2024-07-21 18:23:58.937360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.974 [2024-07-21 18:23:58.937396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.974 [2024-07-21 18:23:58.937462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.974 [2024-07-21 18:23:58.937483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.974 [2024-07-21 18:23:58.937551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.974 [2024-07-21 18:23:58.937570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.974 #23 NEW cov: 12235 ft: 14398 corp: 15/317b lim: 30 exec/s: 23 rss: 73Mb L: 18/30 MS: 1 ChangeBinInt- 00:08:40.974 [2024-07-21 18:23:58.986971] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.974 [2024-07-21 18:23:58.987115] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.974 [2024-07-21 18:23:58.987261] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.974 [2024-07-21 18:23:58.987522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.974 [2024-07-21 18:23:58.987556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.974 [2024-07-21 18:23:58.987624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.974 [2024-07-21 18:23:58.987644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.974 [2024-07-21 18:23:58.987713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.974 [2024-07-21 18:23:58.987732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.974 #24 NEW cov: 12235 ft: 14409 corp: 16/340b lim: 30 exec/s: 24 rss: 73Mb L: 23/30 MS: 1 ChangeBit- 00:08:40.975 [2024-07-21 18:23:59.037220] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.037364] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.037494] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.037626] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:08:40.975 [2024-07-21 18:23:59.037764] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (423544) > buf size (4096) 00:08:40.975 [2024-07-21 18:23:59.038025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.038062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.038132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.038152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.038223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.038243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.038312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.038331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.038399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:9d9d819d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.038418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:40.975 #25 NEW cov: 12235 ft: 14477 corp: 17/370b lim: 30 exec/s: 25 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:08:40.975 [2024-07-21 18:23:59.097330] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.097477] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.097611] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.097742] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (222052) > buf size (4096) 00:08:40.975 [2024-07-21 18:23:59.097992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.098026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.098094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.098114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.098181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.098201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.098273] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d8d800d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.098294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.975 #26 NEW cov: 12235 ft: 14492 corp: 18/395b lim: 30 exec/s: 26 rss: 73Mb L: 25/30 MS: 1 ShuffleBytes- 00:08:40.975 [2024-07-21 18:23:59.147551] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.147692] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.147818] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000036ff 00:08:40.975 [2024-07-21 18:23:59.147946] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.148084] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:40.975 [2024-07-21 18:23:59.148347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.148381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.148450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.148470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.148536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:36360236 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.148556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.148623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.148643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.975 [2024-07-21 18:23:59.148711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.975 [2024-07-21 18:23:59.148730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.234 #27 NEW cov: 12235 ft: 14564 corp: 19/425b lim: 30 exec/s: 27 rss: 73Mb L: 30/30 MS: 1 CopyPart- 00:08:41.234 [2024-07-21 18:23:59.217658] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.234 [2024-07-21 18:23:59.217799] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.234 [2024-07-21 18:23:59.217932] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.234 [2024-07-21 18:23:59.218185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 
cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.234 [2024-07-21 18:23:59.218223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.234 [2024-07-21 18:23:59.218296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.234 [2024-07-21 18:23:59.218316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.234 [2024-07-21 18:23:59.218387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.234 [2024-07-21 18:23:59.218406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.234 #28 NEW cov: 12235 ft: 14585 corp: 20/443b lim: 30 exec/s: 28 rss: 73Mb L: 18/30 MS: 1 CopyPart- 00:08:41.234 [2024-07-21 18:23:59.267803] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.234 [2024-07-21 18:23:59.267945] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:08:41.234 [2024-07-21 18:23:59.268079] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.234 [2024-07-21 18:23:59.268345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.234 [2024-07-21 18:23:59.268378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.234 [2024-07-21 18:23:59.268451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.234 [2024-07-21 18:23:59.268470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.235 [2024-07-21 18:23:59.268536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.235 [2024-07-21 18:23:59.268556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.235 #29 NEW cov: 12235 ft: 14627 corp: 21/463b lim: 30 exec/s: 29 rss: 73Mb L: 20/30 MS: 1 CMP- DE: "\000\000"- 00:08:41.235 [2024-07-21 18:23:59.317986] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.235 [2024-07-21 18:23:59.318126] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff36 00:08:41.235 [2024-07-21 18:23:59.318265] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.235 [2024-07-21 18:23:59.318394] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (222052) > buf size (4096) 00:08:41.235 [2024-07-21 18:23:59.318649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.235 [2024-07-21 18:23:59.318682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.235 [2024-07-21 18:23:59.318749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.235 [2024-07-21 18:23:59.318769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.235 [2024-07-21 18:23:59.318839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:36368336 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.235 [2024-07-21 18:23:59.318858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.235 [2024-07-21 18:23:59.318926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d8d800d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.235 [2024-07-21 18:23:59.318945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.235 #30 NEW cov: 12235 ft: 14636 corp: 22/488b lim: 30 exec/s: 30 rss: 74Mb L: 25/30 MS: 1 CrossOver- 00:08:41.235 [2024-07-21 18:23:59.388186] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:08:41.235 [2024-07-21 18:23:59.388340] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:08:41.235 [2024-07-21 18:23:59.388472] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000bfff 00:08:41.235 [2024-07-21 18:23:59.388606] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001313 00:08:41.235 [2024-07-21 18:23:59.388882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.235 [2024-07-21 18:23:59.388915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.235 [2024-07-21 18:23:59.388982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.235 [2024-07-21 18:23:59.389002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.235 [2024-07-21 18:23:59.389067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.235 [2024-07-21 18:23:59.389090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.235 [2024-07-21 18:23:59.389158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.235 [2024-07-21 18:23:59.389178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.235 #31 NEW cov: 12235 ft: 14658 corp: 23/516b lim: 30 exec/s: 31 rss: 74Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:08:41.505 [2024-07-21 18:23:59.458353] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.505 [2024-07-21 18:23:59.458496] 
ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.505 [2024-07-21 18:23:59.458629] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.506 [2024-07-21 18:23:59.458879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.458913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.458983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.459004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.459072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.459092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.506 #32 NEW cov: 12235 ft: 14661 corp: 24/534b lim: 30 exec/s: 32 rss: 74Mb L: 18/30 MS: 1 ShuffleBytes- 00:08:41.506 [2024-07-21 18:23:59.508481] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.506 [2024-07-21 18:23:59.508622] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.506 [2024-07-21 18:23:59.508753] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fff7 00:08:41.506 [2024-07-21 18:23:59.509008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.509041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.509109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.509129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.509198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.509226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.506 #33 NEW cov: 12235 ft: 14690 corp: 25/552b lim: 30 exec/s: 33 rss: 74Mb L: 18/30 MS: 1 ChangeBit- 00:08:41.506 [2024-07-21 18:23:59.578743] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:08:41.506 [2024-07-21 18:23:59.578888] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:08:41.506 [2024-07-21 18:23:59.579019] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.506 [2024-07-21 18:23:59.579150] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.506 
[2024-07-21 18:23:59.579413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.579450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.579520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.579540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.579609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.579628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.579694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.579714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.506 #34 NEW cov: 12235 ft: 14707 corp: 26/577b lim: 30 exec/s: 34 rss: 74Mb L: 25/30 MS: 1 CopyPart- 00:08:41.506 [2024-07-21 18:23:59.638851] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.506 [2024-07-21 18:23:59.638989] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.506 [2024-07-21 18:23:59.639118] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.506 [2024-07-21 18:23:59.639263] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (849920) > buf size (4096) 00:08:41.506 [2024-07-21 18:23:59.639534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.639567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.639639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.639658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.639724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.639743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.639808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3dff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.639827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:08:41.506 #35 NEW cov: 12235 ft: 14725 corp: 27/606b lim: 30 exec/s: 35 rss: 74Mb L: 29/30 MS: 1 CrossOver- 00:08:41.506 [2024-07-21 18:23:59.688949] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:41.506 [2024-07-21 18:23:59.689091] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:08:41.506 [2024-07-21 18:23:59.689345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.689379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.506 [2024-07-21 18:23:59.689449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.506 [2024-07-21 18:23:59.689472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.765 #36 NEW cov: 12235 ft: 14758 corp: 28/619b lim: 30 exec/s: 18 rss: 74Mb L: 13/30 MS: 1 ShuffleBytes- 00:08:41.765 #36 DONE cov: 12235 ft: 14758 corp: 28/619b lim: 30 exec/s: 18 rss: 74Mb 00:08:41.765 ###### Recommended dictionary. ###### 00:08:41.765 "\000\004\000\000\000\000\000\000" # Uses: 0 00:08:41.765 "\000\000" # Uses: 0 00:08:41.765 ###### End of recommended dictionary. ###### 00:08:41.765 Done 36 runs in 2 second(s) 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:41.765 18:23:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:08:41.765 [2024-07-21 18:23:59.936144] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:41.765 [2024-07-21 18:23:59.936228] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3817311 ] 00:08:42.022 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.023 [2024-07-21 18:24:00.177875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.280 [2024-07-21 18:24:00.266630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.280 [2024-07-21 18:24:00.330826] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.280 [2024-07-21 18:24:00.347066] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:08:42.280 INFO: Running with entropic power schedule (0xFF, 100). 00:08:42.280 INFO: Seed: 297873223 00:08:42.280 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:08:42.280 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:08:42.280 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:42.280 INFO: A corpus is not provided, starting from an empty corpus 00:08:42.280 #2 INITED exec/s: 0 rss: 65Mb 00:08:42.280 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:42.280 This may also happen if the target rejected all inputs we tried so far 00:08:42.845 NEW_FUNC[1/682]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:08:42.845 NEW_FUNC[2/682]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:42.845 #3 NEW cov: 11765 ft: 11787 corp: 2/10b lim: 35 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\001\000\000\000\000\000\000?"- 00:08:42.846 [2024-07-21 18:24:00.894232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a01003f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.846 [2024-07-21 18:24:00.894287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.846 NEW_FUNC[1/15]: 0x179f950 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:08:42.846 NEW_FUNC[2/15]: 0x179fb90 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:08:42.846 #9 NEW cov: 12054 ft: 12579 corp: 3/27b lim: 35 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:08:42.846 [2024-07-21 18:24:00.974073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0100000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.846 [2024-07-21 18:24:00.974109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.846 #10 NEW cov: 12060 ft: 13262 corp: 4/36b lim: 35 exec/s: 0 rss: 72Mb L: 9/17 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:08:42.846 #13 NEW cov: 12145 ft: 13591 corp: 5/45b lim: 35 exec/s: 0 rss: 73Mb L: 9/17 MS: 3 ChangeBinInt-ShuffleBytes-PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:08:43.110 [2024-07-21 18:24:01.074352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.110 [2024-07-21 18:24:01.074389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.110 #14 NEW cov: 12145 ft: 13674 corp: 6/54b lim: 35 exec/s: 0 rss: 73Mb L: 9/17 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:43.110 [2024-07-21 18:24:01.144849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a01003f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.110 [2024-07-21 18:24:01.144885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.110 #15 NEW cov: 12145 ft: 13727 corp: 7/72b lim: 35 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 InsertByte- 00:08:43.110 #16 NEW cov: 12145 ft: 13800 corp: 8/81b lim: 35 exec/s: 0 rss: 73Mb L: 9/18 MS: 1 ChangeBit- 00:08:43.110 [2024-07-21 18:24:01.285193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a01003f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.110 [2024-07-21 18:24:01.285235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.110 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:43.110 #17 NEW cov: 12168 ft: 13846 corp: 9/98b lim: 35 exec/s: 0 rss: 73Mb L: 17/18 MS: 1 ChangeBinInt- 00:08:43.369 #18 NEW cov: 12168 ft: 13897 corp: 10/108b lim: 35 exec/s: 0 rss: 73Mb L: 10/18 MS: 1 InsertByte- 00:08:43.369 [2024-07-21 18:24:01.385207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.369 [2024-07-21 18:24:01.385249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.369 #19 NEW cov: 12168 ft: 13987 corp: 11/115b lim: 35 exec/s: 19 rss: 73Mb L: 7/18 MS: 1 EraseBytes- 00:08:43.369 [2024-07-21 18:24:01.455697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.369 [2024-07-21 18:24:01.455734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.369 #20 NEW cov: 12168 ft: 14004 corp: 12/132b lim: 35 exec/s: 20 rss: 73Mb L: 17/18 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:43.369 [2024-07-21 18:24:01.505747] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.369 [2024-07-21 18:24:01.505906] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.369 [2024-07-21 18:24:01.506043] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.369 [2024-07-21 18:24:01.506401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a00003f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.369 [2024-07-21 18:24:01.506437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.369 [2024-07-21 18:24:01.506510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.369 [2024-07-21 18:24:01.506533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.369 [2024-07-21 18:24:01.506599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.369 [2024-07-21 18:24:01.506622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.369 [2024-07-21 18:24:01.506693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00005d00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.369 [2024-07-21 18:24:01.506715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.369 #21 NEW cov: 12177 ft: 14613 corp: 13/167b lim: 35 exec/s: 21 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:43.369 [2024-07-21 18:24:01.575993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0aff003f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.369 [2024-07-21 18:24:01.576028] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.627 #22 NEW cov: 12177 ft: 14623 corp: 14/184b lim: 35 exec/s: 22 rss: 73Mb L: 17/35 MS: 1 ChangeBinInt- 00:08:43.627 [2024-07-21 18:24:01.646218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a01003f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.627 [2024-07-21 18:24:01.646252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.627 #23 NEW cov: 12177 ft: 14629 corp: 15/202b lim: 35 exec/s: 23 rss: 73Mb L: 18/35 MS: 1 CrossOver- 00:08:43.627 [2024-07-21 18:24:01.696296] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.627 [2024-07-21 18:24:01.696442] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.627 [2024-07-21 18:24:01.696575] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.627 [2024-07-21 18:24:01.696910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a00003f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.627 [2024-07-21 18:24:01.696946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.627 [2024-07-21 18:24:01.697019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.627 [2024-07-21 18:24:01.697046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.627 [2024-07-21 18:24:01.697120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:20000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.627 [2024-07-21 18:24:01.697142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.627 [2024-07-21 18:24:01.697218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00005d00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.627 [2024-07-21 18:24:01.697241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.627 #24 NEW cov: 12177 ft: 14666 corp: 16/237b lim: 35 exec/s: 24 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:08:43.627 [2024-07-21 18:24:01.766544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3f00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.627 [2024-07-21 18:24:01.766580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.627 #25 NEW cov: 12177 ft: 14693 corp: 17/254b lim: 35 exec/s: 25 rss: 73Mb L: 17/35 MS: 1 ShuffleBytes- 00:08:43.627 [2024-07-21 18:24:01.836259] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.627 [2024-07-21 18:24:01.836543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.627 [2024-07-21 
18:24:01.836581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.884 #26 NEW cov: 12177 ft: 14766 corp: 18/263b lim: 35 exec/s: 26 rss: 73Mb L: 9/35 MS: 1 ShuffleBytes- 00:08:43.884 [2024-07-21 18:24:01.886658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00fe cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.884 [2024-07-21 18:24:01.886693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.884 #27 NEW cov: 12177 ft: 14793 corp: 19/270b lim: 35 exec/s: 27 rss: 73Mb L: 7/35 MS: 1 ChangeBit- 00:08:43.884 [2024-07-21 18:24:01.956881] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.884 [2024-07-21 18:24:01.957029] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.884 [2024-07-21 18:24:01.957164] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.884 [2024-07-21 18:24:01.957313] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.884 [2024-07-21 18:24:01.957580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.884 [2024-07-21 18:24:01.957615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.884 [2024-07-21 18:24:01.957689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.884 [2024-07-21 18:24:01.957712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.884 [2024-07-21 18:24:01.957783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.884 [2024-07-21 18:24:01.957805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.884 [2024-07-21 18:24:01.957879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.884 [2024-07-21 18:24:01.957905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.884 [2024-07-21 18:24:01.957975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.884 [2024-07-21 18:24:01.957997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.884 #28 NEW cov: 12177 ft: 14841 corp: 20/305b lim: 35 exec/s: 28 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:43.884 #29 NEW cov: 12177 ft: 14891 corp: 21/314b lim: 35 exec/s: 29 rss: 73Mb L: 9/35 MS: 1 EraseBytes- 00:08:43.884 [2024-07-21 18:24:02.066937] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:43.884 [2024-07-21 18:24:02.067225] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.884 [2024-07-21 18:24:02.067261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.884 #31 NEW cov: 12177 ft: 14909 corp: 22/321b lim: 35 exec/s: 31 rss: 73Mb L: 7/35 MS: 2 ChangeByte-CrossOver- 00:08:44.142 [2024-07-21 18:24:02.117254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0b0b000b cdw11:0b000b0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.142 [2024-07-21 18:24:02.117288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.142 #36 NEW cov: 12177 ft: 14919 corp: 23/331b lim: 35 exec/s: 36 rss: 73Mb L: 10/35 MS: 5 CopyPart-ChangeByte-CrossOver-ChangeBit-InsertRepeatedBytes- 00:08:44.142 [2024-07-21 18:24:02.167404] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:44.142 [2024-07-21 18:24:02.167729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3f010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.142 [2024-07-21 18:24:02.167766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.142 #37 NEW cov: 12177 ft: 14929 corp: 24/349b lim: 35 exec/s: 37 rss: 73Mb L: 18/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:08:44.142 [2024-07-21 18:24:02.217523] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:44.142 [2024-07-21 18:24:02.217976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.142 [2024-07-21 18:24:02.218014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.142 [2024-07-21 18:24:02.218086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0100000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.142 [2024-07-21 18:24:02.218106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.142 #38 NEW cov: 12177 ft: 15122 corp: 25/372b lim: 35 exec/s: 38 rss: 73Mb L: 23/35 MS: 1 CopyPart- 00:08:44.142 #39 NEW cov: 12177 ft: 15177 corp: 26/382b lim: 35 exec/s: 39 rss: 74Mb L: 10/35 MS: 1 InsertByte- 00:08:44.142 [2024-07-21 18:24:02.337669] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:44.142 [2024-07-21 18:24:02.337954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00380000 cdw11:010000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.142 [2024-07-21 18:24:02.337990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.400 #44 NEW cov: 12177 ft: 15179 corp: 27/395b lim: 35 exec/s: 44 rss: 74Mb L: 13/35 MS: 5 EraseBytes-CrossOver-CrossOver-ChangeByte-PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:08:44.400 [2024-07-21 18:24:02.388024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01000043 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.400 [2024-07-21 18:24:02.388058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.400 #45 NEW cov: 12177 ft: 15207 corp: 28/405b lim: 35 exec/s: 22 rss: 74Mb L: 10/35 MS: 1 InsertByte- 00:08:44.400 #45 DONE cov: 12177 ft: 15207 corp: 28/405b lim: 35 exec/s: 22 rss: 74Mb 00:08:44.400 ###### Recommended dictionary. ###### 00:08:44.400 "\001\000\000\000\000\000\000?" # Uses: 5 00:08:44.400 "\377\377\377\377\377\377\377\377" # Uses: 1 00:08:44.400 ###### End of recommended dictionary. ###### 00:08:44.400 Done 45 runs in 2 second(s) 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:44.400 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:44.401 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:08:44.401 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:08:44.401 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:44.401 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:08:44.401 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:44.401 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:44.401 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:44.401 18:24:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:08:44.401 [2024-07-21 18:24:02.610707] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:08:44.401 [2024-07-21 18:24:02.610781] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3817798 ] 00:08:44.659 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.659 [2024-07-21 18:24:02.865855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.917 [2024-07-21 18:24:02.954785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.917 [2024-07-21 18:24:03.018985] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.917 [2024-07-21 18:24:03.035230] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:08:44.917 INFO: Running with entropic power schedule (0xFF, 100). 00:08:44.917 INFO: Seed: 2986488742 00:08:44.917 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:08:44.917 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:08:44.917 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:44.917 INFO: A corpus is not provided, starting from an empty corpus 00:08:44.917 #2 INITED exec/s: 0 rss: 65Mb 00:08:44.917 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:44.917 This may also happen if the target rejected all inputs we tried so far 00:08:45.460 NEW_FUNC[1/686]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:08:45.460 NEW_FUNC[2/686]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:45.460 #4 NEW cov: 11835 ft: 11836 corp: 2/12b lim: 20 exec/s: 0 rss: 72Mb L: 11/11 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:45.460 #6 NEW cov: 11965 ft: 12460 corp: 3/23b lim: 20 exec/s: 0 rss: 72Mb L: 11/11 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:45.460 #7 NEW cov: 11988 ft: 12995 corp: 4/39b lim: 20 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:08:45.460 #8 NEW cov: 12077 ft: 13454 corp: 5/54b lim: 20 exec/s: 0 rss: 73Mb L: 15/16 MS: 1 InsertRepeatedBytes- 00:08:45.460 #9 NEW cov: 12077 ft: 13523 corp: 6/70b lim: 20 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 ChangeByte- 00:08:45.718 #10 NEW cov: 12077 ft: 13626 corp: 7/81b lim: 20 exec/s: 0 rss: 73Mb L: 11/16 MS: 1 ShuffleBytes- 00:08:45.718 #11 NEW cov: 12077 ft: 13728 corp: 8/92b lim: 20 exec/s: 0 rss: 73Mb L: 11/16 MS: 1 ShuffleBytes- 00:08:45.718 #12 NEW cov: 12077 ft: 13805 corp: 9/109b lim: 20 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 InsertByte- 00:08:45.718 #13 NEW cov: 12077 ft: 14069 corp: 10/115b lim: 20 exec/s: 0 rss: 73Mb L: 6/17 MS: 1 InsertRepeatedBytes- 00:08:45.975 #14 NEW cov: 12077 ft: 14166 corp: 11/126b lim: 20 exec/s: 0 rss: 73Mb L: 11/17 MS: 1 ChangeBit- 00:08:45.975 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:45.975 #15 NEW cov: 12100 ft: 14214 corp: 12/137b lim: 20 exec/s: 0 rss: 73Mb L: 11/17 MS: 1 EraseBytes- 00:08:45.975 #16 NEW cov: 12100 ft: 14240 corp: 13/156b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CrossOver- 00:08:45.975 #17 NEW cov: 12100 ft: 14271 corp: 14/171b lim: 20 exec/s: 17 rss: 73Mb L: 15/19 MS: 1 ChangeByte- 00:08:45.975 #18 NEW cov: 12100 ft: 14323 corp: 15/175b lim: 20 exec/s: 18 rss: 
73Mb L: 4/19 MS: 1 InsertRepeatedBytes- 00:08:46.232 #19 NEW cov: 12100 ft: 14393 corp: 16/195b lim: 20 exec/s: 19 rss: 73Mb L: 20/20 MS: 1 CMP- DE: "\377\377\377\377"- 00:08:46.232 #20 NEW cov: 12100 ft: 14439 corp: 17/204b lim: 20 exec/s: 20 rss: 73Mb L: 9/20 MS: 1 EraseBytes- 00:08:46.232 #22 NEW cov: 12100 ft: 14471 corp: 18/213b lim: 20 exec/s: 22 rss: 73Mb L: 9/20 MS: 2 ChangeBinInt-CMP- DE: "\000\000\177\352L\016|\365"- 00:08:46.232 #23 NEW cov: 12100 ft: 14487 corp: 19/228b lim: 20 exec/s: 23 rss: 73Mb L: 15/20 MS: 1 CopyPart- 00:08:46.490 #24 NEW cov: 12100 ft: 14506 corp: 20/245b lim: 20 exec/s: 24 rss: 74Mb L: 17/20 MS: 1 CMP- DE: "\377*\360\204\222\271\020\274"- 00:08:46.490 #30 NEW cov: 12100 ft: 14524 corp: 21/264b lim: 20 exec/s: 30 rss: 74Mb L: 19/20 MS: 1 ChangeBinInt- 00:08:46.490 #31 NEW cov: 12100 ft: 14540 corp: 22/273b lim: 20 exec/s: 31 rss: 74Mb L: 9/20 MS: 1 ChangeBinInt- 00:08:46.490 #32 NEW cov: 12100 ft: 14579 corp: 23/283b lim: 20 exec/s: 32 rss: 74Mb L: 10/20 MS: 1 EraseBytes- 00:08:46.749 #33 NEW cov: 12100 ft: 14603 corp: 24/289b lim: 20 exec/s: 33 rss: 74Mb L: 6/20 MS: 1 EraseBytes- 00:08:46.749 #34 NEW cov: 12100 ft: 14613 corp: 25/300b lim: 20 exec/s: 34 rss: 74Mb L: 11/20 MS: 1 ChangeByte- 00:08:46.749 #35 NEW cov: 12100 ft: 14616 corp: 26/315b lim: 20 exec/s: 35 rss: 74Mb L: 15/20 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:08:46.749 #36 NEW cov: 12100 ft: 14617 corp: 27/335b lim: 20 exec/s: 36 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:46.749 #37 NEW cov: 12100 ft: 14627 corp: 28/355b lim: 20 exec/s: 37 rss: 74Mb L: 20/20 MS: 1 CrossOver- 00:08:47.007 #38 NEW cov: 12100 ft: 14642 corp: 29/375b lim: 20 exec/s: 38 rss: 74Mb L: 20/20 MS: 1 CrossOver- 00:08:47.007 #39 NEW cov: 12100 ft: 14710 corp: 30/392b lim: 20 exec/s: 19 rss: 74Mb L: 17/20 MS: 1 CopyPart- 00:08:47.007 #39 DONE cov: 12100 ft: 14710 corp: 30/392b lim: 20 exec/s: 19 rss: 74Mb 00:08:47.007 ###### Recommended dictionary. ###### 00:08:47.007 "\377\377\377\377" # Uses: 1 00:08:47.007 "\000\000\177\352L\016|\365" # Uses: 0 00:08:47.007 "\377*\360\204\222\271\020\274" # Uses: 0 00:08:47.007 ###### End of recommended dictionary. 
###### 00:08:47.007 Done 39 runs in 2 second(s) 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:47.265 18:24:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:08:47.265 [2024-07-21 18:24:05.279185] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:08:47.265 [2024-07-21 18:24:05.279267] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3818363 ] 00:08:47.265 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.523 [2024-07-21 18:24:05.528864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.523 [2024-07-21 18:24:05.620749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.523 [2024-07-21 18:24:05.684926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.524 [2024-07-21 18:24:05.701154] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:08:47.524 INFO: Running with entropic power schedule (0xFF, 100). 00:08:47.524 INFO: Seed: 1357509400 00:08:47.781 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:08:47.781 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:08:47.781 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:47.781 INFO: A corpus is not provided, starting from an empty corpus 00:08:47.781 #2 INITED exec/s: 0 rss: 65Mb 00:08:47.781 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:47.781 This may also happen if the target rejected all inputs we tried so far 00:08:47.781 [2024-07-21 18:24:05.780012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.781 [2024-07-21 18:24:05.780059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.781 [2024-07-21 18:24:05.780170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.781 [2024-07-21 18:24:05.780189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.781 [2024-07-21 18:24:05.780313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.781 [2024-07-21 18:24:05.780332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.781 [2024-07-21 18:24:05.780447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.781 [2024-07-21 18:24:05.780465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.038 NEW_FUNC[1/698]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:08:48.038 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:48.038 #7 NEW cov: 11939 ft: 11932 corp: 2/30b lim: 35 exec/s: 0 rss: 72Mb L: 29/29 MS: 5 ChangeBit-ChangeBit-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:08:48.296 [2024-07-21 18:24:06.261045] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.261097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.261194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.261222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.261322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.261343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.261444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.261465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.296 #8 NEW cov: 12075 ft: 12552 corp: 3/64b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CopyPart- 00:08:48.296 [2024-07-21 18:24:06.331461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.331494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.331593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.331612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.331706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.331724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.331813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fffffcff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.331831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.296 #9 NEW cov: 12081 ft: 12728 corp: 4/93b lim: 35 exec/s: 0 rss: 73Mb L: 29/34 MS: 1 ChangeBinInt- 00:08:48.296 [2024-07-21 18:24:06.381874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.381903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.381995] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0c0cff0c cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.382013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.382107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.382124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.382226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.382242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.296 #15 NEW cov: 12166 ft: 12909 corp: 5/125b lim: 35 exec/s: 0 rss: 73Mb L: 32/34 MS: 1 InsertRepeatedBytes- 00:08:48.296 [2024-07-21 18:24:06.431738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.431766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.431856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.431875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.431967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.431984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.296 #16 NEW cov: 12166 ft: 13392 corp: 6/150b lim: 35 exec/s: 0 rss: 73Mb L: 25/34 MS: 1 EraseBytes- 00:08:48.296 [2024-07-21 18:24:06.502627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.502653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.502749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.502765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 18:24:06.502868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.502886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.296 [2024-07-21 
18:24:06.502980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fffffcff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.296 [2024-07-21 18:24:06.502997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.553 #17 NEW cov: 12166 ft: 13477 corp: 7/179b lim: 35 exec/s: 0 rss: 73Mb L: 29/34 MS: 1 ChangeByte- 00:08:48.553 [2024-07-21 18:24:06.572980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.553 [2024-07-21 18:24:06.573010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.553 [2024-07-21 18:24:06.573097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.553 [2024-07-21 18:24:06.573116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.554 [2024-07-21 18:24:06.573203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-21 18:24:06.573224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.554 [2024-07-21 18:24:06.573317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-21 18:24:06.573336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.554 #18 NEW cov: 12166 ft: 13518 corp: 8/208b lim: 35 exec/s: 0 rss: 73Mb L: 29/34 MS: 1 CrossOver- 00:08:48.554 [2024-07-21 18:24:06.643332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-21 18:24:06.643361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.554 [2024-07-21 18:24:06.643454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-21 18:24:06.643471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.554 [2024-07-21 18:24:06.643567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-21 18:24:06.643583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.554 [2024-07-21 18:24:06.643676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-21 18:24:06.643693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.554 NEW_FUNC[1/1]: 
0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:48.554 #19 NEW cov: 12189 ft: 13626 corp: 9/242b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ShuffleBytes- 00:08:48.554 [2024-07-21 18:24:06.703494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-21 18:24:06.703526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.554 [2024-07-21 18:24:06.703621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0cffff0c cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-21 18:24:06.703643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.554 [2024-07-21 18:24:06.703732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-21 18:24:06.703750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.554 #20 NEW cov: 12189 ft: 13673 corp: 10/269b lim: 35 exec/s: 0 rss: 73Mb L: 27/34 MS: 1 EraseBytes- 00:08:48.811 [2024-07-21 18:24:06.773403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.811 [2024-07-21 18:24:06.773431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.811 [2024-07-21 18:24:06.773521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.811 [2024-07-21 18:24:06.773540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.811 #21 NEW cov: 12189 ft: 13921 corp: 11/286b lim: 35 exec/s: 21 rss: 73Mb L: 17/34 MS: 1 EraseBytes- 00:08:48.811 [2024-07-21 18:24:06.844849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8d8d8d8d cdw11:e2ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.844877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.812 [2024-07-21 18:24:06.844970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff60ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.844989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.812 [2024-07-21 18:24:06.845078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.845095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.812 [2024-07-21 18:24:06.845192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff 
cdw11:fcff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.845209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.812 #22 NEW cov: 12189 ft: 14002 corp: 12/319b lim: 35 exec/s: 22 rss: 73Mb L: 33/34 MS: 1 InsertRepeatedBytes- 00:08:48.812 [2024-07-21 18:24:06.894450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8d8d8d8d cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.894479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.812 [2024-07-21 18:24:06.894569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fcffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.894587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.812 #23 NEW cov: 12189 ft: 14025 corp: 13/336b lim: 35 exec/s: 23 rss: 73Mb L: 17/34 MS: 1 EraseBytes- 00:08:48.812 [2024-07-21 18:24:06.965846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.965872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.812 [2024-07-21 18:24:06.965963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.965982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.812 [2024-07-21 18:24:06.966074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.966091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.812 [2024-07-21 18:24:06.966181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-21 18:24:06.966197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.812 #24 NEW cov: 12189 ft: 14040 corp: 14/370b lim: 35 exec/s: 24 rss: 73Mb L: 34/34 MS: 1 ChangeBinInt- 00:08:49.069 [2024-07-21 18:24:07.036379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.069 [2024-07-21 18:24:07.036405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.069 [2024-07-21 18:24:07.036506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.069 [2024-07-21 18:24:07.036523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.069 [2024-07-21 
18:24:07.036615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fffffffa cdw11:ff060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.069 [2024-07-21 18:24:07.036632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.069 [2024-07-21 18:24:07.036721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.069 [2024-07-21 18:24:07.036741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.069 #25 NEW cov: 12189 ft: 14131 corp: 15/404b lim: 35 exec/s: 25 rss: 74Mb L: 34/34 MS: 1 ChangeBinInt- 00:08:49.069 [2024-07-21 18:24:07.096711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.069 [2024-07-21 18:24:07.096736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.069 [2024-07-21 18:24:07.096832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.096850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.070 [2024-07-21 18:24:07.096940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.096955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.070 [2024-07-21 18:24:07.097048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fffffcff cdw11:fcff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.097065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.070 #26 NEW cov: 12189 ft: 14161 corp: 16/433b lim: 35 exec/s: 26 rss: 74Mb L: 29/34 MS: 1 ChangeBinInt- 00:08:49.070 [2024-07-21 18:24:07.147154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.147183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.070 [2024-07-21 18:24:07.147279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.147296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.070 [2024-07-21 18:24:07.147381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.147396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.070 
[2024-07-21 18:24:07.147489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fffffdff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.147506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.070 #27 NEW cov: 12189 ft: 14177 corp: 17/462b lim: 35 exec/s: 27 rss: 74Mb L: 29/34 MS: 1 ChangeBit- 00:08:49.070 [2024-07-21 18:24:07.206882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.206907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.070 [2024-07-21 18:24:07.207003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffefffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.207021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.070 #28 NEW cov: 12189 ft: 14197 corp: 18/479b lim: 35 exec/s: 28 rss: 74Mb L: 17/34 MS: 1 ChangeBit- 00:08:49.070 [2024-07-21 18:24:07.268478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8d8d8d8d cdw11:e2ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.268504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.070 [2024-07-21 18:24:07.268601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff60ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.268619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.070 [2024-07-21 18:24:07.268712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.268729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.070 [2024-07-21 18:24:07.268812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.268830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.070 [2024-07-21 18:24:07.268923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.070 [2024-07-21 18:24:07.268941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:49.327 #29 NEW cov: 12189 ft: 14258 corp: 19/514b lim: 35 exec/s: 29 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:08:49.327 [2024-07-21 18:24:07.318764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8dff8d8d cdw11:ff8d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.318792] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.318884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff600003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.318902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.318998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.319014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.319107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.319124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.319219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.319237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:49.327 #30 NEW cov: 12189 ft: 14282 corp: 20/549b lim: 35 exec/s: 30 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:08:49.327 [2024-07-21 18:24:07.368331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.368358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.368442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.368459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.368544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.368559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.327 #31 NEW cov: 12189 ft: 14335 corp: 21/571b lim: 35 exec/s: 31 rss: 74Mb L: 22/35 MS: 1 EraseBytes- 00:08:49.327 [2024-07-21 18:24:07.418418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.418444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.418532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.418549] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.418636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.418655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.327 #34 NEW cov: 12189 ft: 14357 corp: 22/592b lim: 35 exec/s: 34 rss: 74Mb L: 21/35 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:08:49.327 [2024-07-21 18:24:07.469602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.469627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.469722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.327 [2024-07-21 18:24:07.469740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.327 [2024-07-21 18:24:07.469824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.328 [2024-07-21 18:24:07.469842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.328 [2024-07-21 18:24:07.469927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.328 [2024-07-21 18:24:07.469944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.328 [2024-07-21 18:24:07.470026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:fffcffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.328 [2024-07-21 18:24:07.470045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:49.328 #35 NEW cov: 12189 ft: 14430 corp: 23/627b lim: 35 exec/s: 35 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:08:49.328 [2024-07-21 18:24:07.529722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.328 [2024-07-21 18:24:07.529748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.328 [2024-07-21 18:24:07.529839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.328 [2024-07-21 18:24:07.529857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.328 [2024-07-21 18:24:07.529948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.328 [2024-07-21 
18:24:07.529964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.328 [2024-07-21 18:24:07.530057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fffffcff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.328 [2024-07-21 18:24:07.530075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.585 #36 NEW cov: 12189 ft: 14471 corp: 24/656b lim: 35 exec/s: 36 rss: 74Mb L: 29/35 MS: 1 ShuffleBytes- 00:08:49.585 [2024-07-21 18:24:07.580051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.580077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.585 [2024-07-21 18:24:07.580163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.580181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.585 [2024-07-21 18:24:07.580263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fffffffa cdw11:ff060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.580284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.585 [2024-07-21 18:24:07.580372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00060003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.580389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.585 #37 NEW cov: 12189 ft: 14478 corp: 25/690b lim: 35 exec/s: 37 rss: 74Mb L: 34/35 MS: 1 CMP- DE: "\000\006"- 00:08:49.585 [2024-07-21 18:24:07.649553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:a0a0840a cdw11:a0a00001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.649583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.585 [2024-07-21 18:24:07.649669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:a0a0a0a0 cdw11:a0a00001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.649687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.585 #39 NEW cov: 12189 ft: 14489 corp: 26/710b lim: 35 exec/s: 39 rss: 74Mb L: 20/35 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:49.585 [2024-07-21 18:24:07.700664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffe2ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.700692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.585 [2024-07-21 18:24:07.700780] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.700799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.585 [2024-07-21 18:24:07.700894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.700913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.585 [2024-07-21 18:24:07.701001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.701021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.585 #40 NEW cov: 12189 ft: 14504 corp: 27/739b lim: 35 exec/s: 40 rss: 74Mb L: 29/35 MS: 1 CrossOver- 00:08:49.585 [2024-07-21 18:24:07.750007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:a0a0840a cdw11:a0a00001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.750034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.585 [2024-07-21 18:24:07.750125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7ba0a0a0 cdw11:a0a00001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.585 [2024-07-21 18:24:07.750144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.585 #41 NEW cov: 12189 ft: 14526 corp: 28/759b lim: 35 exec/s: 20 rss: 74Mb L: 20/35 MS: 1 ChangeByte- 00:08:49.585 #41 DONE cov: 12189 ft: 14526 corp: 28/759b lim: 35 exec/s: 20 rss: 74Mb 00:08:49.585 ###### Recommended dictionary. ###### 00:08:49.585 "\000\006" # Uses: 0 00:08:49.585 ###### End of recommended dictionary. 
###### 00:08:49.585 Done 41 runs in 2 second(s) 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:08:49.843 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:49.844 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:08:49.844 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:49.844 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:49.844 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:49.844 18:24:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:08:49.844 [2024-07-21 18:24:07.976809] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:08:49.844 [2024-07-21 18:24:07.976883] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3818915 ] 00:08:49.844 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.102 [2024-07-21 18:24:08.230384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.360 [2024-07-21 18:24:08.318936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.360 [2024-07-21 18:24:08.383192] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.360 [2024-07-21 18:24:08.399435] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:08:50.360 INFO: Running with entropic power schedule (0xFF, 100). 00:08:50.360 INFO: Seed: 4053517708 00:08:50.360 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:08:50.360 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:08:50.360 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:50.360 INFO: A corpus is not provided, starting from an empty corpus 00:08:50.360 #2 INITED exec/s: 0 rss: 65Mb 00:08:50.360 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:50.360 This may also happen if the target rejected all inputs we tried so far 00:08:50.360 [2024-07-21 18:24:08.448487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.360 [2024-07-21 18:24:08.448527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.360 [2024-07-21 18:24:08.448592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.360 [2024-07-21 18:24:08.448616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.360 [2024-07-21 18:24:08.448678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.360 [2024-07-21 18:24:08.448697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.360 [2024-07-21 18:24:08.448759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.360 [2024-07-21 18:24:08.448777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.926 NEW_FUNC[1/698]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:08:50.926 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:50.926 #11 NEW cov: 11950 ft: 11939 corp: 2/44b lim: 45 exec/s: 0 rss: 72Mb L: 43/43 MS: 4 ShuffleBytes-InsertByte-CrossOver-InsertRepeatedBytes- 00:08:50.926 [2024-07-21 18:24:08.939236] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.926 [2024-07-21 18:24:08.939288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.926 #12 NEW cov: 12086 ft: 13394 corp: 3/55b lim: 45 exec/s: 0 rss: 72Mb L: 11/43 MS: 1 CrossOver- 00:08:50.926 [2024-07-21 18:24:08.999323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.926 [2024-07-21 18:24:08.999359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.926 #13 NEW cov: 12092 ft: 13606 corp: 4/66b lim: 45 exec/s: 0 rss: 73Mb L: 11/43 MS: 1 ShuffleBytes- 00:08:50.927 [2024-07-21 18:24:09.069474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.927 [2024-07-21 18:24:09.069509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.927 #14 NEW cov: 12177 ft: 13865 corp: 5/77b lim: 45 exec/s: 0 rss: 73Mb L: 11/43 MS: 1 ChangeByte- 00:08:50.927 [2024-07-21 18:24:09.119658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:002c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.927 [2024-07-21 18:24:09.119692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.185 #15 NEW cov: 12177 ft: 13965 corp: 6/88b lim: 45 exec/s: 0 rss: 73Mb L: 11/43 MS: 1 ChangeByte- 00:08:51.185 [2024-07-21 18:24:09.189864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.185 [2024-07-21 18:24:09.189898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.185 #16 NEW cov: 12177 ft: 14005 corp: 7/99b lim: 45 exec/s: 0 rss: 73Mb L: 11/43 MS: 1 ShuffleBytes- 00:08:51.185 [2024-07-21 18:24:09.240357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.185 [2024-07-21 18:24:09.240390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.185 [2024-07-21 18:24:09.240458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.185 [2024-07-21 18:24:09.240482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.185 [2024-07-21 18:24:09.240546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.185 [2024-07-21 18:24:09.240566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.185 #17 NEW cov: 12177 ft: 14267 corp: 8/129b lim: 45 exec/s: 0 rss: 73Mb L: 
30/43 MS: 1 InsertRepeatedBytes- 00:08:51.185 [2024-07-21 18:24:09.290277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00fc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.185 [2024-07-21 18:24:09.290311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.185 [2024-07-21 18:24:09.290381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.185 [2024-07-21 18:24:09.290401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.185 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:51.185 #18 NEW cov: 12200 ft: 14602 corp: 9/154b lim: 45 exec/s: 0 rss: 73Mb L: 25/43 MS: 1 EraseBytes- 00:08:51.185 [2024-07-21 18:24:09.360308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.185 [2024-07-21 18:24:09.360342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.185 #19 NEW cov: 12200 ft: 14780 corp: 10/166b lim: 45 exec/s: 0 rss: 73Mb L: 12/43 MS: 1 InsertByte- 00:08:51.444 [2024-07-21 18:24:09.410628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00fc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.410663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.444 [2024-07-21 18:24:09.410735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.410755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.444 #20 NEW cov: 12200 ft: 14815 corp: 11/191b lim: 45 exec/s: 20 rss: 73Mb L: 25/43 MS: 1 ChangeByte- 00:08:51.444 [2024-07-21 18:24:09.481271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.481305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.444 [2024-07-21 18:24:09.481373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.481392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.444 [2024-07-21 18:24:09.481459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.481479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.444 [2024-07-21 18:24:09.481547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.481566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.444 #21 NEW cov: 12200 ft: 14844 corp: 12/234b lim: 45 exec/s: 21 rss: 73Mb L: 43/43 MS: 1 ChangeBit- 00:08:51.444 [2024-07-21 18:24:09.550999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00fc0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.551033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.444 [2024-07-21 18:24:09.551101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.551121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.444 #22 NEW cov: 12200 ft: 14863 corp: 13/259b lim: 45 exec/s: 22 rss: 73Mb L: 25/43 MS: 1 ChangeBinInt- 00:08:51.444 [2024-07-21 18:24:09.600954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.600988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.444 #23 NEW cov: 12200 ft: 14909 corp: 14/270b lim: 45 exec/s: 23 rss: 73Mb L: 11/43 MS: 1 ChangeBit- 00:08:51.444 [2024-07-21 18:24:09.651872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.651907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.444 [2024-07-21 18:24:09.651976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.651996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.444 [2024-07-21 18:24:09.652062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.652082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.444 [2024-07-21 18:24:09.652148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.652167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.444 [2024-07-21 18:24:09.652239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.444 [2024-07-21 18:24:09.652258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:08:51.703 #24 NEW cov: 12200 ft: 14984 corp: 15/315b lim: 45 exec/s: 24 rss: 73Mb L: 45/45 MS: 1 CrossOver- 00:08:51.703 [2024-07-21 18:24:09.701267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 18:24:09.701301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.703 #25 NEW cov: 12200 ft: 15066 corp: 16/327b lim: 45 exec/s: 25 rss: 73Mb L: 12/45 MS: 1 InsertByte- 00:08:51.703 [2024-07-21 18:24:09.751937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 18:24:09.751971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.703 [2024-07-21 18:24:09.752039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 18:24:09.752063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.703 [2024-07-21 18:24:09.752128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 18:24:09.752147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.703 [2024-07-21 18:24:09.752216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 18:24:09.752235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.703 #26 NEW cov: 12200 ft: 15079 corp: 17/370b lim: 45 exec/s: 26 rss: 73Mb L: 43/45 MS: 1 ChangeBit- 00:08:51.703 [2024-07-21 18:24:09.802181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 18:24:09.802221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.703 [2024-07-21 18:24:09.802292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 18:24:09.802312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.703 [2024-07-21 18:24:09.802384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a6a6a6a6 cdw11:a6a60005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 18:24:09.802403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.703 [2024-07-21 18:24:09.802470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fcfca6a6 cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 
18:24:09.802489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.703 #27 NEW cov: 12200 ft: 15116 corp: 18/412b lim: 45 exec/s: 27 rss: 73Mb L: 42/45 MS: 1 InsertRepeatedBytes- 00:08:51.703 [2024-07-21 18:24:09.851665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.703 [2024-07-21 18:24:09.851700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.703 #28 NEW cov: 12200 ft: 15139 corp: 19/421b lim: 45 exec/s: 28 rss: 73Mb L: 9/45 MS: 1 EraseBytes- 00:08:51.962 [2024-07-21 18:24:09.922640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:15150000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:09.922674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:09.922743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15150000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:09.922763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:09.922830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15150000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:09.922849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:09.922916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:15151515 cdw11:15150000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:09.922939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:09.923005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:00001500 cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:09.923024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:51.962 #29 NEW cov: 12200 ft: 15160 corp: 20/466b lim: 45 exec/s: 29 rss: 73Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:08:51.962 [2024-07-21 18:24:09.992453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:09.992486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:09.992556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:09.992575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:09.992638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:09.992657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:10.042618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:10.042658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:10.042729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:10.042750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:10.042821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:10.042841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.962 #31 NEW cov: 12200 ft: 15177 corp: 21/493b lim: 45 exec/s: 31 rss: 73Mb L: 27/45 MS: 2 EraseBytes-CopyPart- 00:08:51.962 [2024-07-21 18:24:10.092389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00001000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:10.092425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.962 #32 NEW cov: 12200 ft: 15180 corp: 22/504b lim: 45 exec/s: 32 rss: 73Mb L: 11/45 MS: 1 ChangeBit- 00:08:51.962 [2024-07-21 18:24:10.163156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:10.163190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:10.163266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fca60007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:10.163287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:10.163356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a6a6a6a6 cdw11:a6a60005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:10.163380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.962 [2024-07-21 18:24:10.163448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a6fca6a6 cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.962 [2024-07-21 18:24:10.163467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.220 #33 NEW cov: 12200 ft: 15182 corp: 23/547b lim: 45 exec/s: 33 rss: 74Mb L: 43/45 
MS: 1 CrossOver- 00:08:52.220 [2024-07-21 18:24:10.233352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.233385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.233456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfc00fc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.233477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.233548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a6a6a6a6 cdw11:a6410005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.233566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.233635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fcfca6a6 cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.233654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.220 #34 NEW cov: 12200 ft: 15226 corp: 24/589b lim: 45 exec/s: 34 rss: 74Mb L: 42/45 MS: 1 ChangeByte- 00:08:52.220 [2024-07-21 18:24:10.283291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:35350035 cdw11:35350001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.283326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.283396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35350001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.283416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.283482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.283500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.220 #35 NEW cov: 12200 ft: 15278 corp: 25/622b lim: 45 exec/s: 35 rss: 74Mb L: 33/45 MS: 1 InsertRepeatedBytes- 00:08:52.220 [2024-07-21 18:24:10.353277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00002600 cdw11:00fc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.353313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.353385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfc0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.353408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.220 #36 NEW cov: 12200 ft: 15290 corp: 26/647b lim: 45 exec/s: 36 rss: 74Mb L: 25/45 MS: 1 ChangeByte- 00:08:52.220 [2024-07-21 18:24:10.404003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.404037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.404109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.404129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.404194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.404225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.404293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.404313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.220 [2024-07-21 18:24:10.404381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.220 [2024-07-21 18:24:10.404400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:52.477 #37 NEW cov: 12200 ft: 15293 corp: 27/692b lim: 45 exec/s: 18 rss: 74Mb L: 45/45 MS: 1 ShuffleBytes- 00:08:52.477 #37 DONE cov: 12200 ft: 15293 corp: 27/692b lim: 45 exec/s: 18 rss: 74Mb 00:08:52.477 Done 37 runs in 2 second(s) 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:08:52.477 18:24:10 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:52.477 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:08:52.478 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:52.478 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:52.478 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:52.478 18:24:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:08:52.478 [2024-07-21 18:24:10.647655] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:52.478 [2024-07-21 18:24:10.647731] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819264 ] 00:08:52.736 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.736 [2024-07-21 18:24:10.895964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.993 [2024-07-21 18:24:10.985344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.993 [2024-07-21 18:24:11.049581] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.993 [2024-07-21 18:24:11.065805] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:08:52.993 INFO: Running with entropic power schedule (0xFF, 100). 00:08:52.993 INFO: Seed: 2427551200 00:08:52.993 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:08:52.993 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:08:52.993 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:52.993 INFO: A corpus is not provided, starting from an empty corpus 00:08:52.993 #2 INITED exec/s: 0 rss: 65Mb 00:08:52.993 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
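[editor's note] The WARNING above (its second sentence follows on the next line) is expected at startup: each run begins with "0 files found" in a freshly created corpus directory, so libFuzzer has no seed inputs until its first mutations land, which with -t 1 happens within the first second. In the status lines, "#N NEW" means execution N added an input to the corpus; "cov" counts covered code edges, "ft" finer-grained coverage features, "corp" the corpus size in units/bytes, "lim" the current input-length cap, "L: a/b" roughly the new unit's length against the largest unit so far, and "MS" lists the mutation sequence that produced it. A quick, hedged way to pull the headline numbers out of a saved copy of one run's output (the log path below is hypothetical):

    log=/tmp/llvm_nvmf_run.log                     # assumed: a saved copy of one run's output
    grep -o 'cov: [0-9]*'   "$log" | tail -1       # last reported edge-coverage count
    grep -o 'Done [0-9]* runs' "$log" | tail -1    # final run total, e.g. "Done 31 runs"

Comparing the last "cov:" value across nightly runs is a cheap smoke test that coverage instrumentation is still working.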
00:08:52.993 This may also happen if the target rejected all inputs we tried so far 00:08:52.993 [2024-07-21 18:24:11.121437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2a cdw11:00000000 00:08:52.993 [2024-07-21 18:24:11.121476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.559 NEW_FUNC[1/696]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:08:53.559 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:53.559 #3 NEW cov: 11873 ft: 11874 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:08:53.559 [2024-07-21 18:24:11.612577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2a cdw11:00000000 00:08:53.559 [2024-07-21 18:24:11.612626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.559 #4 NEW cov: 12003 ft: 12433 corp: 3/6b lim: 10 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 CrossOver- 00:08:53.559 [2024-07-21 18:24:11.682647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a26 cdw11:00000000 00:08:53.559 [2024-07-21 18:24:11.682688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.559 #5 NEW cov: 12009 ft: 12674 corp: 4/8b lim: 10 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 ChangeBinInt- 00:08:53.559 [2024-07-21 18:24:11.733184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:53.559 [2024-07-21 18:24:11.733226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.559 [2024-07-21 18:24:11.733291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002bf0 cdw11:00000000 00:08:53.559 [2024-07-21 18:24:11.733310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.559 [2024-07-21 18:24:11.733373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000887a cdw11:00000000 00:08:53.559 [2024-07-21 18:24:11.733391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.559 [2024-07-21 18:24:11.733460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00006a9f cdw11:00000000 00:08:53.559 [2024-07-21 18:24:11.733479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.559 #6 NEW cov: 12094 ft: 13252 corp: 5/17b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CMP- DE: "\000+\360\210zj\237\220"- 00:08:53.818 [2024-07-21 18:24:11.783443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.783477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.818 
[2024-07-21 18:24:11.783541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002bf0 cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.783561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.818 [2024-07-21 18:24:11.783625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000887a cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.783643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.818 [2024-07-21 18:24:11.783705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00006a0a cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.783724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.818 [2024-07-21 18:24:11.783785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00009f90 cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.783803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:53.818 #7 NEW cov: 12094 ft: 13362 corp: 6/27b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:08:53.818 [2024-07-21 18:24:11.853553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2a cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.853586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.818 [2024-07-21 18:24:11.853649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.853668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.818 [2024-07-21 18:24:11.853730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.853748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.818 [2024-07-21 18:24:11.853808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.853827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.818 #8 NEW cov: 12094 ft: 13399 corp: 7/36b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:08:53.818 [2024-07-21 18:24:11.903270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.903303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.818 #9 NEW cov: 12094 ft: 13521 corp: 8/38b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:08:53.818 [2024-07-21 18:24:11.973836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2a cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.973870] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.818 [2024-07-21 18:24:11.973936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.973956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.818 [2024-07-21 18:24:11.974018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.974036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.818 [2024-07-21 18:24:11.974095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002cff cdw11:00000000 00:08:53.818 [2024-07-21 18:24:11.974113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.818 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:53.818 #10 NEW cov: 12117 ft: 13571 corp: 9/47b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 ChangeByte- 00:08:54.076 [2024-07-21 18:24:12.044046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.044081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.044142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002bf0 cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.044161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.044226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000887a cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.044245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.044307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002a9f cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.044326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.076 #11 NEW cov: 12117 ft: 13601 corp: 10/56b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:08:54.076 [2024-07-21 18:24:12.094189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e2a cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.094229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.094294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.094313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.094375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.094393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.094455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.094474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.076 #12 NEW cov: 12117 ft: 13639 corp: 11/65b lim: 10 exec/s: 12 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:08:54.076 [2024-07-21 18:24:12.144337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e2a cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.144372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.144439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003e00 cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.144458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.144518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.144537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.144596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.144615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.076 #13 NEW cov: 12117 ft: 13657 corp: 12/74b lim: 10 exec/s: 13 rss: 73Mb L: 9/10 MS: 1 CMP- DE: ">\000\000\000"- 00:08:54.076 [2024-07-21 18:24:12.214376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.214409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.214470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.214489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.214551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f088 cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.214570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.076 #15 NEW cov: 12117 ft: 13840 corp: 13/80b lim: 10 exec/s: 15 rss: 73Mb L: 6/10 MS: 2 EraseBytes-CrossOver- 00:08:54.076 [2024-07-21 18:24:12.264504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.264539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.264602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000202b cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.264622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.076 [2024-07-21 18:24:12.264686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f088 cdw11:00000000 00:08:54.076 [2024-07-21 18:24:12.264704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.334 #16 NEW cov: 12117 ft: 13893 corp: 14/86b lim: 10 exec/s: 16 rss: 73Mb L: 6/10 MS: 1 ChangeBit- 00:08:54.334 [2024-07-21 18:24:12.334409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:54.334 [2024-07-21 18:24:12.334443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.334 #18 NEW cov: 12117 ft: 13905 corp: 15/88b lim: 10 exec/s: 18 rss: 73Mb L: 2/10 MS: 2 CrossOver-CopyPart- 00:08:54.334 [2024-07-21 18:24:12.384554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c10a cdw11:00000000 00:08:54.334 [2024-07-21 18:24:12.384588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.334 #19 NEW cov: 12117 ft: 13917 corp: 16/90b lim: 10 exec/s: 19 rss: 73Mb L: 2/10 MS: 1 InsertByte- 00:08:54.334 [2024-07-21 18:24:12.435276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000088 cdw11:00000000 00:08:54.334 [2024-07-21 18:24:12.435313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.334 [2024-07-21 18:24:12.435376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000af0 cdw11:00000000 00:08:54.334 [2024-07-21 18:24:12.435395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.334 [2024-07-21 18:24:12.435454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007a2b cdw11:00000000 00:08:54.334 [2024-07-21 18:24:12.435473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.334 [2024-07-21 18:24:12.435535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00006a0a cdw11:00000000 00:08:54.335 [2024-07-21 18:24:12.435553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.335 [2024-07-21 18:24:12.435619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00009f90 cdw11:00000000 00:08:54.335 [2024-07-21 18:24:12.435637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:54.335 #20 NEW cov: 12117 ft: 13937 corp: 17/100b lim: 10 exec/s: 20 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:08:54.335 [2024-07-21 18:24:12.504913] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:54.335 [2024-07-21 18:24:12.504946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.593 #21 NEW cov: 12117 ft: 13986 corp: 18/102b lim: 10 exec/s: 21 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:08:54.593 [2024-07-21 18:24:12.575269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2a cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.575302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.575365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000260a cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.575384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.593 #22 NEW cov: 12117 ft: 14124 corp: 19/106b lim: 10 exec/s: 22 rss: 74Mb L: 4/10 MS: 1 CrossOver- 00:08:54.593 [2024-07-21 18:24:12.645837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000088 cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.645870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.645934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000088 cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.645953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.646017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000af0 cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.646036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.646100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007a0a cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.646119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.646181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00009f90 cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.646200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:54.593 #23 NEW cov: 12117 ft: 14163 corp: 20/116b lim: 10 exec/s: 23 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:08:54.593 [2024-07-21 18:24:12.716033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.716068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.716130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002bf0 cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.716149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.716221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007a0a cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.716240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.716302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000886a cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.716321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.716383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00009f90 cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.716404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:54.593 #24 NEW cov: 12117 ft: 14215 corp: 21/126b lim: 10 exec/s: 24 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:08:54.593 [2024-07-21 18:24:12.766187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000088 cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.766227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.766294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000af0 cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.766314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.766377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007a2b cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.766397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.766460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00006a0a cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.766479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.593 [2024-07-21 18:24:12.766542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00009f0a cdw11:00000000 00:08:54.593 [2024-07-21 18:24:12.766562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:54.593 #25 NEW cov: 12117 ft: 14225 corp: 22/136b lim: 10 exec/s: 25 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:08:54.851 [2024-07-21 18:24:12.815808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ad9 cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.815841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.851 #26 NEW cov: 12117 ft: 14236 corp: 23/139b lim: 10 exec/s: 26 rss: 74Mb L: 3/10 MS: 1 ChangeBinInt- 00:08:54.851 [2024-07-21 18:24:12.866461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 
cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.866493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.851 [2024-07-21 18:24:12.866559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002bf0 cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.866578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.851 [2024-07-21 18:24:12.866641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007a0a cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.866660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.851 [2024-07-21 18:24:12.866722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000886a cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.866741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.851 [2024-07-21 18:24:12.866803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.866822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:54.851 #27 NEW cov: 12117 ft: 14248 corp: 24/149b lim: 10 exec/s: 27 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:08:54.851 [2024-07-21 18:24:12.936380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a88 cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.936413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.851 [2024-07-21 18:24:12.936477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007a6a cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.936497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.851 [2024-07-21 18:24:12.936561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009f90 cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.936580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.851 #28 NEW cov: 12117 ft: 14262 corp: 25/155b lim: 10 exec/s: 28 rss: 74Mb L: 6/10 MS: 1 EraseBytes- 00:08:54.851 [2024-07-21 18:24:12.986640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2a cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.986673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.851 [2024-07-21 18:24:12.986737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:54.851 [2024-07-21 18:24:12.986756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.851 [2024-07-21 18:24:12.986819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff 
cdw11:00000000 00:08:54.852 [2024-07-21 18:24:12.986837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.852 [2024-07-21 18:24:12.986898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff29 cdw11:00000000 00:08:54.852 [2024-07-21 18:24:12.986917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.852 #29 NEW cov: 12117 ft: 14300 corp: 26/164b lim: 10 exec/s: 29 rss: 74Mb L: 9/10 MS: 1 ChangeByte- 00:08:54.852 [2024-07-21 18:24:13.036662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2a cdw11:00000000 00:08:54.852 [2024-07-21 18:24:13.036697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.852 [2024-07-21 18:24:13.036759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a01 cdw11:00000000 00:08:54.852 [2024-07-21 18:24:13.036782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.852 [2024-07-21 18:24:13.036843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:54.852 [2024-07-21 18:24:13.036862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.109 #30 NEW cov: 12117 ft: 14318 corp: 27/171b lim: 10 exec/s: 30 rss: 74Mb L: 7/10 MS: 1 CMP- DE: "\001\000\000\000"- 00:08:55.109 [2024-07-21 18:24:13.086932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a88 cdw11:00000000 00:08:55.109 [2024-07-21 18:24:13.086966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.109 [2024-07-21 18:24:13.087028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007a6a cdw11:00000000 00:08:55.109 [2024-07-21 18:24:13.087047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.110 [2024-07-21 18:24:13.087109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009f90 cdw11:00000000 00:08:55.110 [2024-07-21 18:24:13.087127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.110 [2024-07-21 18:24:13.087188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fdfd cdw11:00000000 00:08:55.110 [2024-07-21 18:24:13.087207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.110 #31 NEW cov: 12117 ft: 14329 corp: 28/180b lim: 10 exec/s: 15 rss: 74Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:08:55.110 #31 DONE cov: 12117 ft: 14329 corp: 28/180b lim: 10 exec/s: 15 rss: 74Mb 00:08:55.110 ###### Recommended dictionary. ###### 00:08:55.110 "\000+\360\210zj\237\220" # Uses: 0 00:08:55.110 ">\000\000\000" # Uses: 0 00:08:55.110 "\001\000\000\000" # Uses: 0 00:08:55.110 ###### End of recommended dictionary. 
###### 00:08:55.110 Done 31 runs in 2 second(s) 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:55.110 18:24:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:08:55.367 [2024-07-21 18:24:13.330651] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:08:55.367 [2024-07-21 18:24:13.330723] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819624 ] 00:08:55.367 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.626 [2024-07-21 18:24:13.585474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.626 [2024-07-21 18:24:13.674103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.626 [2024-07-21 18:24:13.738376] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.626 [2024-07-21 18:24:13.754609] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:08:55.626 INFO: Running with entropic power schedule (0xFF, 100). 00:08:55.626 INFO: Seed: 820590461 00:08:55.626 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:08:55.626 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:08:55.626 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:55.626 INFO: A corpus is not provided, starting from an empty corpus 00:08:55.626 #2 INITED exec/s: 0 rss: 65Mb 00:08:55.626 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:55.626 This may also happen if the target rejected all inputs we tried so far 00:08:55.626 [2024-07-21 18:24:13.810403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:55.626 [2024-07-21 18:24:13.810441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.626 [2024-07-21 18:24:13.810502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:55.626 [2024-07-21 18:24:13.810521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.626 [2024-07-21 18:24:13.810582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:55.626 [2024-07-21 18:24:13.810600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.143 NEW_FUNC[1/695]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:08:56.143 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:56.143 #4 NEW cov: 11850 ft: 11848 corp: 2/8b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:56.143 [2024-07-21 18:24:14.291860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.143 [2024-07-21 18:24:14.291911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.143 [2024-07-21 18:24:14.291977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.143 [2024-07-21 18:24:14.291997] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.143 [2024-07-21 18:24:14.292060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.143 [2024-07-21 18:24:14.292083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.143 [2024-07-21 18:24:14.292147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:56.143 [2024-07-21 18:24:14.292166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.143 NEW_FUNC[1/1]: 0x1d9ca70 in thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1065 00:08:56.143 #5 NEW cov: 12003 ft: 12629 corp: 3/16b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:08:56.401 [2024-07-21 18:24:14.371972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.372008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.401 [2024-07-21 18:24:14.372075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.372095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.401 [2024-07-21 18:24:14.372156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.372175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.401 [2024-07-21 18:24:14.372244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.372263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.401 #6 NEW cov: 12009 ft: 12848 corp: 4/24b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CrossOver- 00:08:56.401 [2024-07-21 18:24:14.422292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.422327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.401 [2024-07-21 18:24:14.422391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.422411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.401 [2024-07-21 18:24:14.422476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.422495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.401 [2024-07-21 18:24:14.422558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.422577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.401 [2024-07-21 18:24:14.422640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.422660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:56.401 #7 NEW cov: 12094 ft: 13133 corp: 5/34b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:08:56.401 [2024-07-21 18:24:14.491862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002727 cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.491896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.401 #9 NEW cov: 12094 ft: 13455 corp: 6/36b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 2 ChangeByte-CopyPart- 00:08:56.401 [2024-07-21 18:24:14.542108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005927 cdw11:00000000 00:08:56.401 [2024-07-21 18:24:14.542141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.402 #10 NEW cov: 12094 ft: 13569 corp: 7/39b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:08:56.402 [2024-07-21 18:24:14.612693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b5b5 cdw11:00000000 00:08:56.402 [2024-07-21 18:24:14.612726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.402 [2024-07-21 18:24:14.612794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b5b5 cdw11:00000000 00:08:56.402 [2024-07-21 18:24:14.612813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.402 [2024-07-21 18:24:14.612879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b5b5 cdw11:00000000 00:08:56.402 [2024-07-21 18:24:14.612899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.402 [2024-07-21 18:24:14.612964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002727 cdw11:00000000 00:08:56.402 [2024-07-21 18:24:14.612983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.660 #11 NEW cov: 12094 ft: 13648 corp: 8/47b lim: 10 exec/s: 0 rss: 73Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:08:56.660 [2024-07-21 18:24:14.662799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.662832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.662898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.662918] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.662982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.663001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.663064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff76 cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.663083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.660 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:56.660 #12 NEW cov: 12117 ft: 13749 corp: 9/56b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 InsertByte- 00:08:56.660 [2024-07-21 18:24:14.733088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.733121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.733186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.733205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.733274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.733292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.733361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff60 cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.733380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.660 #13 NEW cov: 12117 ft: 13809 corp: 10/64b lim: 10 exec/s: 0 rss: 73Mb L: 8/10 MS: 1 ChangeByte- 00:08:56.660 [2024-07-21 18:24:14.783192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.783232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.783300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.783320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.783387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.783406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.783472] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.783491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.660 #14 NEW cov: 12117 ft: 13846 corp: 11/72b lim: 10 exec/s: 14 rss: 73Mb L: 8/10 MS: 1 CrossOver- 00:08:56.660 [2024-07-21 18:24:14.833020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b5b5 cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.833053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.660 [2024-07-21 18:24:14.833124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b527 cdw11:00000000 00:08:56.660 [2024-07-21 18:24:14.833143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.918 #15 NEW cov: 12117 ft: 14048 corp: 12/77b lim: 10 exec/s: 15 rss: 73Mb L: 5/10 MS: 1 EraseBytes- 00:08:56.918 [2024-07-21 18:24:14.903485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.918 [2024-07-21 18:24:14.903519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.918 [2024-07-21 18:24:14.903584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.918 [2024-07-21 18:24:14.903603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.918 [2024-07-21 18:24:14.903668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fbff cdw11:00000000 00:08:56.918 [2024-07-21 18:24:14.903687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.918 [2024-07-21 18:24:14.903750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff60 cdw11:00000000 00:08:56.918 [2024-07-21 18:24:14.903769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.918 #16 NEW cov: 12117 ft: 14071 corp: 13/85b lim: 10 exec/s: 16 rss: 73Mb L: 8/10 MS: 1 ChangeBit- 00:08:56.919 [2024-07-21 18:24:14.973749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.919 [2024-07-21 18:24:14.973783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.919 [2024-07-21 18:24:14.973857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.919 [2024-07-21 18:24:14.973877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.919 [2024-07-21 18:24:14.973941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fbff cdw11:00000000 00:08:56.919 [2024-07-21 18:24:14.973960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.919 [2024-07-21 18:24:14.974023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000bf60 cdw11:00000000 00:08:56.919 [2024-07-21 18:24:14.974042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.919 #17 NEW cov: 12117 ft: 14092 corp: 14/93b lim: 10 exec/s: 17 rss: 73Mb L: 8/10 MS: 1 ChangeBit- 00:08:56.919 [2024-07-21 18:24:15.043445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000270c cdw11:00000000 00:08:56.919 [2024-07-21 18:24:15.043479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.919 #18 NEW cov: 12117 ft: 14100 corp: 15/95b lim: 10 exec/s: 18 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:08:56.919 [2024-07-21 18:24:15.093635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005927 cdw11:00000000 00:08:56.919 [2024-07-21 18:24:15.093669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.177 #19 NEW cov: 12117 ft: 14143 corp: 16/98b lim: 10 exec/s: 19 rss: 73Mb L: 3/10 MS: 1 ShuffleBytes- 00:08:57.177 [2024-07-21 18:24:15.163995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.164028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.177 [2024-07-21 18:24:15.164096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.164116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.177 #20 NEW cov: 12117 ft: 14155 corp: 17/102b lim: 10 exec/s: 20 rss: 73Mb L: 4/10 MS: 1 EraseBytes- 00:08:57.177 [2024-07-21 18:24:15.214370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.214403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.177 [2024-07-21 18:24:15.214471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.214491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.177 [2024-07-21 18:24:15.214554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.214573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.177 [2024-07-21 18:24:15.214637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000600 cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.214656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.177 #21 NEW cov: 12117 ft: 14194 corp: 18/110b lim: 10 
exec/s: 21 rss: 73Mb L: 8/10 MS: 1 CMP- DE: "\006\000"- 00:08:57.177 [2024-07-21 18:24:15.264566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.264599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.177 [2024-07-21 18:24:15.264667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.264686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.177 [2024-07-21 18:24:15.264749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.264769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.177 [2024-07-21 18:24:15.264832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff76 cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.264851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.177 #22 NEW cov: 12117 ft: 14212 corp: 19/119b lim: 10 exec/s: 22 rss: 73Mb L: 9/10 MS: 1 ShuffleBytes- 00:08:57.177 [2024-07-21 18:24:15.334784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.177 [2024-07-21 18:24:15.334818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.177 [2024-07-21 18:24:15.334885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.178 [2024-07-21 18:24:15.334906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.178 [2024-07-21 18:24:15.334972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.178 [2024-07-21 18:24:15.334993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.178 [2024-07-21 18:24:15.335057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff76 cdw11:00000000 00:08:57.178 [2024-07-21 18:24:15.335076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.178 #23 NEW cov: 12117 ft: 14270 corp: 20/128b lim: 10 exec/s: 23 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:08:57.178 [2024-07-21 18:24:15.384506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:08:57.178 [2024-07-21 18:24:15.384539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.436 #24 NEW cov: 12117 ft: 14283 corp: 21/130b lim: 10 exec/s: 24 rss: 74Mb L: 2/10 MS: 1 CrossOver- 00:08:57.436 [2024-07-21 18:24:15.435074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b4ff cdw11:00000000 
00:08:57.436 [2024-07-21 18:24:15.435107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.435173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.435193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.435268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fffb cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.435287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.435354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.435373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.436 #25 NEW cov: 12117 ft: 14302 corp: 22/139b lim: 10 exec/s: 25 rss: 74Mb L: 9/10 MS: 1 InsertByte- 00:08:57.436 [2024-07-21 18:24:15.485187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff27 cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.485228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.485296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000cff cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.485316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.485384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.485403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.485468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff76 cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.485487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.436 #26 NEW cov: 12117 ft: 14305 corp: 23/148b lim: 10 exec/s: 26 rss: 74Mb L: 9/10 MS: 1 CrossOver- 00:08:57.436 [2024-07-21 18:24:15.535456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c6ff cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.535489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.535558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.535577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.535643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 
00:08:57.436 [2024-07-21 18:24:15.535662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.535728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.535747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.535813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000760a cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.535832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:57.436 #27 NEW cov: 12117 ft: 14323 corp: 24/158b lim: 10 exec/s: 27 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:08:57.436 [2024-07-21 18:24:15.605569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.605602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.605670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.605690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.605757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000009ff cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.605776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.436 [2024-07-21 18:24:15.605841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff76 cdw11:00000000 00:08:57.436 [2024-07-21 18:24:15.605864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.694 #28 NEW cov: 12117 ft: 14345 corp: 25/167b lim: 10 exec/s: 28 rss: 74Mb L: 9/10 MS: 1 ChangeBinInt- 00:08:57.694 [2024-07-21 18:24:15.675904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:08:57.694 [2024-07-21 18:24:15.675940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.694 [2024-07-21 18:24:15.676007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.694 [2024-07-21 18:24:15.676026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.694 [2024-07-21 18:24:15.676091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 00:08:57.694 [2024-07-21 18:24:15.676111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.694 [2024-07-21 18:24:15.676175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 
00:08:57.694 [2024-07-21 18:24:15.676194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.694 [2024-07-21 18:24:15.676268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000760a cdw11:00000000 00:08:57.694 [2024-07-21 18:24:15.676288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:57.694 #29 NEW cov: 12117 ft: 14362 corp: 26/177b lim: 10 exec/s: 29 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:08:57.694 [2024-07-21 18:24:15.745683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.694 [2024-07-21 18:24:15.745717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.694 [2024-07-21 18:24:15.745784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.694 [2024-07-21 18:24:15.745804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.694 #30 NEW cov: 12117 ft: 14443 corp: 27/181b lim: 10 exec/s: 15 rss: 74Mb L: 4/10 MS: 1 CrossOver- 00:08:57.694 #30 DONE cov: 12117 ft: 14443 corp: 27/181b lim: 10 exec/s: 15 rss: 74Mb 00:08:57.694 ###### Recommended dictionary. ###### 00:08:57.694 "\006\000" # Uses: 0 00:08:57.694 ###### End of recommended dictionary. ###### 00:08:57.694 Done 30 runs in 2 second(s) 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 
00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:57.953 18:24:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:08:57.953 [2024-07-21 18:24:15.992553] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:57.953 [2024-07-21 18:24:15.992629] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819983 ] 00:08:57.953 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.211 [2024-07-21 18:24:16.238401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.211 [2024-07-21 18:24:16.327382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.211 [2024-07-21 18:24:16.391605] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.211 [2024-07-21 18:24:16.407842] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:08:58.211 INFO: Running with entropic power schedule (0xFF, 100). 00:08:58.211 INFO: Seed: 3473592795 00:08:58.469 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:08:58.469 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:08:58.469 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:58.469 INFO: A corpus is not provided, starting from an empty corpus 00:08:58.469 [2024-07-21 18:24:16.473504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.469 [2024-07-21 18:24:16.473545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.469 #2 INITED cov: 11901 ft: 11902 corp: 1/1b exec/s: 0 rss: 71Mb 00:08:58.469 [2024-07-21 18:24:16.523597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.469 [2024-07-21 18:24:16.523632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.469 [2024-07-21 18:24:16.523702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.469 [2024-07-21 18:24:16.523722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.469 #3 NEW cov: 12031 ft: 12969 corp: 2/3b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CrossOver- 00:08:58.469 [2024-07-21 18:24:16.593813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.469 [2024-07-21 18:24:16.593846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.469 [2024-07-21 18:24:16.593916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.469 [2024-07-21 18:24:16.593940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.469 #4 NEW cov: 12037 ft: 13237 corp: 3/5b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeByte- 00:08:58.469 [2024-07-21 18:24:16.663821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.469 [2024-07-21 18:24:16.663855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.728 #5 NEW cov: 12122 ft: 13618 corp: 4/6b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 CrossOver- 00:08:58.728 [2024-07-21 18:24:16.713990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.728 [2024-07-21 18:24:16.714023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.728 #6 NEW cov: 12122 ft: 13728 corp: 5/7b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ShuffleBytes- 00:08:58.728 [2024-07-21 18:24:16.784145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.728 [2024-07-21 18:24:16.784179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.728 #7 NEW cov: 12122 ft: 13857 corp: 6/8b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ChangeBit- 00:08:58.728 [2024-07-21 18:24:16.834328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.728 [2024-07-21 18:24:16.834362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.728 #8 NEW cov: 12122 ft: 13916 corp: 7/9b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ChangeByte- 00:08:58.728 [2024-07-21 18:24:16.904501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.728 [2024-07-21 18:24:16.904535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.728 #9 NEW cov: 12122 ft: 13978 corp: 8/10b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ChangeBit- 00:08:58.986 [2024-07-21 18:24:16.955341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.986 [2024-07-21 18:24:16.955374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.986 [2024-07-21 18:24:16.955444] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.986 [2024-07-21 18:24:16.955463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.986 [2024-07-21 18:24:16.955533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.986 [2024-07-21 18:24:16.955551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.986 [2024-07-21 18:24:16.955618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.986 [2024-07-21 18:24:16.955637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.986 [2024-07-21 18:24:16.955705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.986 [2024-07-21 18:24:16.955728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:58.986 #10 NEW cov: 12122 ft: 14352 corp: 9/15b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:08:58.986 [2024-07-21 18:24:17.004781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.986 [2024-07-21 18:24:17.004814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.986 #11 NEW cov: 12122 ft: 14389 corp: 10/16b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:58.986 [2024-07-21 18:24:17.054884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.986 [2024-07-21 18:24:17.054917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.986 #12 NEW cov: 12122 ft: 14449 corp: 11/17b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBinInt- 00:08:58.986 [2024-07-21 18:24:17.105037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.986 [2024-07-21 18:24:17.105072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.986 #13 NEW cov: 12122 ft: 14494 corp: 12/18b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:08:58.986 [2024-07-21 18:24:17.175234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.986 [2024-07-21 18:24:17.175267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.244 #14 NEW cov: 12122 ft: 14509 corp: 13/19b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:08:59.244 [2024-07-21 
18:24:17.245633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.244 [2024-07-21 18:24:17.245666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.244 [2024-07-21 18:24:17.245736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.244 [2024-07-21 18:24:17.245757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.244 #15 NEW cov: 12122 ft: 14560 corp: 14/21b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:08:59.244 [2024-07-21 18:24:17.295763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.244 [2024-07-21 18:24:17.295795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.244 [2024-07-21 18:24:17.295866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.244 [2024-07-21 18:24:17.295886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.811 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:59.811 #16 NEW cov: 12145 ft: 14583 corp: 15/23b lim: 5 exec/s: 16 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:08:59.811 [2024-07-21 18:24:17.809231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.811 [2024-07-21 18:24:17.809286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.811 [2024-07-21 18:24:17.809401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.811 [2024-07-21 18:24:17.809425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.811 #17 NEW cov: 12145 ft: 14624 corp: 16/25b lim: 5 exec/s: 17 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:08:59.811 [2024-07-21 18:24:17.880130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.811 [2024-07-21 18:24:17.880169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.811 [2024-07-21 18:24:17.880279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.811 [2024-07-21 18:24:17.880302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.811 [2024-07-21 18:24:17.880397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.811 [2024-07-21 18:24:17.880418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.811 [2024-07-21 18:24:17.880514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.811 [2024-07-21 18:24:17.880537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:59.811 [2024-07-21 18:24:17.880635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.811 [2024-07-21 18:24:17.880657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:59.811 #18 NEW cov: 12145 ft: 14762 corp: 17/30b lim: 5 exec/s: 18 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:08:59.811 [2024-07-21 18:24:17.969132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.811 [2024-07-21 18:24:17.969165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.811 [2024-07-21 18:24:17.969271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.811 [2024-07-21 18:24:17.969294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.811 #19 NEW cov: 12145 ft: 14793 corp: 18/32b lim: 5 exec/s: 19 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt- 00:09:00.069 [2024-07-21 18:24:18.049440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.069 [2024-07-21 18:24:18.049475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.069 [2024-07-21 18:24:18.049573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.069 [2024-07-21 18:24:18.049597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.069 #20 NEW cov: 12145 ft: 14804 corp: 19/34b lim: 5 exec/s: 20 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:09:00.069 [2024-07-21 18:24:18.129783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.069 [2024-07-21 18:24:18.129821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.069 [2024-07-21 18:24:18.129918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.069 [2024-07-21 18:24:18.129940] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.069 #21 NEW cov: 12145 ft: 14806 corp: 20/36b lim: 5 exec/s: 21 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:09:00.069 [2024-07-21 18:24:18.189950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.069 [2024-07-21 18:24:18.189983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.069 [2024-07-21 18:24:18.190083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.069 [2024-07-21 18:24:18.190106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.069 #22 NEW cov: 12145 ft: 14854 corp: 21/38b lim: 5 exec/s: 22 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:09:00.069 [2024-07-21 18:24:18.270190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.069 [2024-07-21 18:24:18.270229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.069 [2024-07-21 18:24:18.270330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.069 [2024-07-21 18:24:18.270353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.328 #23 NEW cov: 12145 ft: 14862 corp: 22/40b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:09:00.328 [2024-07-21 18:24:18.330461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.328 [2024-07-21 18:24:18.330495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.328 [2024-07-21 18:24:18.330592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.328 [2024-07-21 18:24:18.330614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.328 #24 NEW cov: 12145 ft: 14915 corp: 23/42b lim: 5 exec/s: 24 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:09:00.328 [2024-07-21 18:24:18.390679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.328 [2024-07-21 18:24:18.390714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.328 [2024-07-21 18:24:18.390813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.328 [2024-07-21 18:24:18.390835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.328 #25 NEW 
cov: 12145 ft: 14935 corp: 24/44b lim: 5 exec/s: 25 rss: 73Mb L: 2/5 MS: 1 CopyPart-
00:09:00.328 [2024-07-21 18:24:18.450950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:00.328 [2024-07-21 18:24:18.450989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:00.328 [2024-07-21 18:24:18.451085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:00.328 [2024-07-21 18:24:18.451107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:00.328 #26 NEW cov: 12145 ft: 14946 corp: 25/46b lim: 5 exec/s: 13 rss: 73Mb L: 2/5 MS: 1 InsertByte-
00:09:00.328 #26 DONE cov: 12145 ft: 14946 corp: 25/46b lim: 5 exec/s: 13 rss: 73Mb
00:09:00.328 Done 26 runs in 2 second(s)
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:00.586 18:24:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
[2024-07-21 18:24:18.676278] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
[2024-07-21 18:24:18.676352] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3820339 ]
00:09:00.586 EAL: No free 2048 kB hugepages reported on node 1
00:09:00.844 [2024-07-21 18:24:18.934452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:00.844 [2024-07-21 18:24:19.025530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:01.103 [2024-07-21 18:24:19.090057] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:01.103 [2024-07-21 18:24:19.106291] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 ***
00:09:01.103 INFO: Running with entropic power schedule (0xFF, 100).
00:09:01.103 INFO: Seed: 1876623466
00:09:01.103 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4),
00:09:01.103 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358),
00:09:01.103 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:09:01.103 INFO: A corpus is not provided, starting from an empty corpus
00:09:01.103 [2024-07-21 18:24:19.183807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:01.103 [2024-07-21 18:24:19.183859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:01.103 #2 INITED cov: 11901 ft: 11896 corp: 1/1b exec/s: 0 rss: 71Mb
00:09:01.103 [2024-07-21 18:24:19.243861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:01.103 [2024-07-21 18:24:19.243900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:01.103 #3 NEW cov: 12031 ft: 12301 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte-
00:09:01.362 [2024-07-21 18:24:19.325177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:01.362 [2024-07-21 18:24:19.325322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:01.362 [2024-07-21 18:24:19.325345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:01.362 [2024-07-21 18:24:19.325443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:01.362 [2024-07-21 18:24:19.325464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0
m:0 dnr:0 00:09:01.362 [2024-07-21 18:24:19.325563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.362 [2024-07-21 18:24:19.325585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.362 #4 NEW cov: 12037 ft: 13464 corp: 3/6b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:09:01.362 [2024-07-21 18:24:19.415381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.362 [2024-07-21 18:24:19.415415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.362 [2024-07-21 18:24:19.415511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.362 [2024-07-21 18:24:19.415534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.362 [2024-07-21 18:24:19.415637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.362 [2024-07-21 18:24:19.415659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.362 #5 NEW cov: 12122 ft: 13890 corp: 4/9b lim: 5 exec/s: 0 rss: 72Mb L: 3/4 MS: 1 CMP- DE: "\000\000"- 00:09:01.362 [2024-07-21 18:24:19.485244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.362 [2024-07-21 18:24:19.485278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.362 [2024-07-21 18:24:19.485382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.362 [2024-07-21 18:24:19.485404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.362 #6 NEW cov: 12122 ft: 14134 corp: 5/11b lim: 5 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 CrossOver- 00:09:01.362 [2024-07-21 18:24:19.545441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.362 [2024-07-21 18:24:19.545475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.362 [2024-07-21 18:24:19.545575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.362 [2024-07-21 18:24:19.545598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.620 #7 NEW cov: 12122 ft: 14245 corp: 6/13b lim: 5 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 CopyPart- 00:09:01.620 [2024-07-21 18:24:19.605336] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.620 [2024-07-21 18:24:19.605372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.620 #8 NEW cov: 12122 ft: 14343 corp: 7/14b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ChangeBit- 00:09:01.620 [2024-07-21 18:24:19.666667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.620 [2024-07-21 18:24:19.666701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.620 [2024-07-21 18:24:19.666797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.620 [2024-07-21 18:24:19.666822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.620 [2024-07-21 18:24:19.666916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.620 [2024-07-21 18:24:19.666939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.620 [2024-07-21 18:24:19.667040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.620 [2024-07-21 18:24:19.667062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.620 #9 NEW cov: 12122 ft: 14409 corp: 8/18b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertByte- 00:09:01.620 [2024-07-21 18:24:19.757274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.621 [2024-07-21 18:24:19.757308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.621 [2024-07-21 18:24:19.757407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.621 [2024-07-21 18:24:19.757430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.621 [2024-07-21 18:24:19.757524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.621 [2024-07-21 18:24:19.757552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.621 [2024-07-21 18:24:19.757647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.621 [2024-07-21 18:24:19.757668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:09:01.621 [2024-07-21 18:24:19.757759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.621 [2024-07-21 18:24:19.757780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.621 #10 NEW cov: 12122 ft: 14501 corp: 9/23b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:09:01.879 [2024-07-21 18:24:19.847549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:19.847583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.879 [2024-07-21 18:24:19.847682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:19.847704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.879 [2024-07-21 18:24:19.847801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:19.847825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.879 [2024-07-21 18:24:19.847926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:19.847949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.879 [2024-07-21 18:24:19.848046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:19.848069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.879 #11 NEW cov: 12122 ft: 14580 corp: 10/28b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:09:01.879 [2024-07-21 18:24:19.936806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:19.936841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.879 [2024-07-21 18:24:19.936946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:19.936970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.879 #12 NEW cov: 12122 ft: 14599 corp: 11/30b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:09:01.879 [2024-07-21 18:24:20.018615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:01.879 [2024-07-21 18:24:20.018651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.879 [2024-07-21 18:24:20.018751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:20.018777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.879 [2024-07-21 18:24:20.018874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:20.018898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.879 [2024-07-21 18:24:20.018989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:20.019013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.879 [2024-07-21 18:24:20.019107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.879 [2024-07-21 18:24:20.019129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:02.446 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:02.446 #13 NEW cov: 12145 ft: 14673 corp: 12/35b lim: 5 exec/s: 13 rss: 73Mb L: 5/5 MS: 1 InsertByte- 00:09:02.446 [2024-07-21 18:24:20.519842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.446 [2024-07-21 18:24:20.519896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.446 [2024-07-21 18:24:20.520003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.446 [2024-07-21 18:24:20.520025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.446 [2024-07-21 18:24:20.520129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.446 [2024-07-21 18:24:20.520153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.446 [2024-07-21 18:24:20.520261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.446 [2024-07-21 18:24:20.520284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.446 [2024-07-21 18:24:20.520391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.446 [2024-07-21 18:24:20.520414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:02.446 #14 NEW cov: 12145 ft: 14699 corp: 13/40b lim: 5 exec/s: 14 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:09:02.446 [2024-07-21 18:24:20.609517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.446 [2024-07-21 18:24:20.609557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.447 [2024-07-21 18:24:20.609656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.447 [2024-07-21 18:24:20.609680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.447 [2024-07-21 18:24:20.609781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.447 [2024-07-21 18:24:20.609810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.447 #15 NEW cov: 12145 ft: 14719 corp: 14/43b lim: 5 exec/s: 15 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:09:02.705 [2024-07-21 18:24:20.670602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.705 [2024-07-21 18:24:20.670637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.705 [2024-07-21 18:24:20.670747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.705 [2024-07-21 18:24:20.670771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.705 [2024-07-21 18:24:20.670871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.705 [2024-07-21 18:24:20.670895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.705 [2024-07-21 18:24:20.670989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.705 [2024-07-21 18:24:20.671012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.705 [2024-07-21 18:24:20.671113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.705 [2024-07-21 18:24:20.671137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:02.705 #16 NEW cov: 12145 ft: 14752 corp: 
15/48b lim: 5 exec/s: 16 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:09:02.705 [2024-07-21 18:24:20.759542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.705 [2024-07-21 18:24:20.759580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.705 #17 NEW cov: 12145 ft: 14764 corp: 16/49b lim: 5 exec/s: 17 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:09:02.705 [2024-07-21 18:24:20.840522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.705 [2024-07-21 18:24:20.840558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.705 [2024-07-21 18:24:20.840661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.705 [2024-07-21 18:24:20.840684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.705 #18 NEW cov: 12145 ft: 14771 corp: 17/51b lim: 5 exec/s: 18 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:09:02.965 [2024-07-21 18:24:20.920950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.965 [2024-07-21 18:24:20.920986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.965 [2024-07-21 18:24:20.921090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.965 [2024-07-21 18:24:20.921114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.965 #19 NEW cov: 12145 ft: 14786 corp: 18/53b lim: 5 exec/s: 19 rss: 74Mb L: 2/5 MS: 1 PersAutoDict- DE: "\000\000"- 00:09:02.965 [2024-07-21 18:24:20.981890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.965 [2024-07-21 18:24:20.981925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.965 [2024-07-21 18:24:20.982032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.965 [2024-07-21 18:24:20.982055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.965 [2024-07-21 18:24:20.982153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.965 [2024-07-21 18:24:20.982177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.965 #20 NEW cov: 12145 ft: 14802 corp: 19/56b lim: 5 exec/s: 20 rss: 74Mb L: 3/5 MS: 1 PersAutoDict- DE: "\000\000"- 
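[Editor's note: the "#N NEW cov: ... ft: ... corp: ..." records throughout this run are standard libFuzzer status lines: cov counts covered code edges, ft counts features, corp gives units/bytes in the corpus, L gives the new unit's size over the largest seen, and MS names the mutation sequence that produced it. A minimal sketch of how such a record can be parsed follows; it assumes only the output format visible in this log, and the helper name and regex are illustrative, not part of SPDK or libFuzzer.]

import re

# Hypothetical helper: extract the headline counters from one libFuzzer
# status record as printed in the log above.
STATUS = re.compile(
    r"#(?P<n>\d+) (?P<event>INITED|NEW|REDUCE|DONE)"
    r" cov: (?P<cov>\d+) ft: (?P<ft>\d+)"
    r" corp: (?P<units>\d+)/(?P<size>\d+)b"
)

def parse_status(record):
    """Return a dict of counters for one status record, or None if it
    does not carry coverage data (e.g. a bare '#2 INITED exec/s: ...')."""
    m = STATUS.search(record)
    return m.groupdict() if m else None

# Example, taken verbatim from the record just above:
print(parse_status(
    '#20 NEW cov: 12145 ft: 14802 corp: 19/56b lim: 5 exec/s: 20 '
    'rss: 74Mb L: 3/5 MS: 1 PersAutoDict- DE: "\\000\\000"'
))
# -> {'n': '20', 'event': 'NEW', 'cov': '12145', 'ft': '14802',
#     'units': '19', 'size': '56'}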
00:09:02.965 [2024-07-21 18:24:21.072977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.966 [2024-07-21 18:24:21.073011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.966 [2024-07-21 18:24:21.073115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.966 [2024-07-21 18:24:21.073138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.966 [2024-07-21 18:24:21.073248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.966 [2024-07-21 18:24:21.073272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.966 [2024-07-21 18:24:21.073373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.966 [2024-07-21 18:24:21.073395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.966 [2024-07-21 18:24:21.073499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.966 [2024-07-21 18:24:21.073523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:02.966 #21 NEW cov: 12145 ft: 14876 corp: 20/61b lim: 5 exec/s: 21 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:09:02.966 [2024-07-21 18:24:21.131830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.966 [2024-07-21 18:24:21.131866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.966 #22 NEW cov: 12145 ft: 14904 corp: 21/62b lim: 5 exec/s: 11 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:09:02.966 #22 DONE cov: 12145 ft: 14904 corp: 21/62b lim: 5 exec/s: 11 rss: 74Mb 00:09:02.966 ###### Recommended dictionary. ###### 00:09:02.966 "\000\000" # Uses: 2 00:09:02.966 ###### End of recommended dictionary. 
###### 00:09:02.966 Done 22 runs in 2 second(s) 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:03.225 18:24:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:09:03.225 [2024-07-21 18:24:21.358745] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
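[Editor's note: the nvmf/run.sh trace above shows the per-fuzzer setup that repeats for every fuzzer type: the TCP port is 44 followed by the zero-padded fuzzer type (printf %02d 10 -> port=4410), the JSON config template is retargeted to that port with sed, and two known allocations are suppressed for LeakSanitizer before launch. A minimal sketch of the same derivation follows, re-expressed in Python purely for readability; the variable names mirror the shell variables, and the shell trace above remains the authoritative record.]

# Re-derivation of the values visible in the run.sh trace above.
fuzzer_type = 10                          # argument passed to start_llvm_fuzz
port = int("44" + "%02d" % fuzzer_type)   # printf %02d 10 -> "10" -> 4410
assert port == 4410

trid = ("trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 "
        "traddr:127.0.0.1 trsvcid:%d" % port)

# Equivalent of the sed call that retargets the JSON config template:
#   sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' fuzz_json.conf
def patch_config(template):
    return template.replace('"trsvcid": "4420"', '"trsvcid": "%d"' % port)

# LSAN suppressions echoed by run.sh@41/@42 before launching the fuzzer:
suppressions = ["leak:spdk_nvmf_qpair_disconnect", "leak:nvmf_ctrlr_create"]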
00:09:03.226 [2024-07-21 18:24:21.358819] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3820698 ]
00:09:03.552 EAL: No free 2048 kB hugepages reported on node 1
00:09:03.552 [2024-07-21 18:24:21.596386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:03.552 [2024-07-21 18:24:21.684856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:03.811 [2024-07-21 18:24:21.749076] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:03.811 [2024-07-21 18:24:21.765322] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 ***
00:09:03.811 INFO: Running with entropic power schedule (0xFF, 100).
00:09:03.811 INFO: Seed: 240666157
00:09:03.812 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4),
00:09:03.812 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358),
00:09:03.812 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
00:09:03.812 INFO: A corpus is not provided, starting from an empty corpus
00:09:03.812 #2 INITED exec/s: 0 rss: 65Mb
00:09:03.812 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:03.812 This may also happen if the target rejected all inputs we tried so far
00:09:03.812 [2024-07-21 18:24:21.833588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:03.812 [2024-07-21 18:24:21.833638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:03.812 [2024-07-21 18:24:21.833753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:03.812 [2024-07-21 18:24:21.833782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:03.812 [2024-07-21 18:24:21.833899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:03.812 [2024-07-21 18:24:21.833923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:03.812 [2024-07-21 18:24:21.834036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:03.812 [2024-07-21 18:24:21.834061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:09:04.072 NEW_FUNC[1/697]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205
00:09:04.072 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:09:04.072 #11 NEW cov: 11924 ft: 11918 corp: 2/35b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 4 CopyPart-CrossOver-ShuffleBytes-InsertRepeatedBytes-
00:09:04.072 [2024-07-21
18:24:22.194293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.072 [2024-07-21 18:24:22.194350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.072 [2024-07-21 18:24:22.194462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.072 [2024-07-21 18:24:22.194488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.072 [2024-07-21 18:24:22.194601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.072 [2024-07-21 18:24:22.194628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.072 [2024-07-21 18:24:22.194740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.072 [2024-07-21 18:24:22.194767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.072 #12 NEW cov: 12054 ft: 12536 corp: 3/69b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:09:04.072 [2024-07-21 18:24:22.284584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ac2caca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.072 [2024-07-21 18:24:22.284622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.072 [2024-07-21 18:24:22.284726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.072 [2024-07-21 18:24:22.284750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.072 [2024-07-21 18:24:22.284851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.072 [2024-07-21 18:24:22.284874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.072 [2024-07-21 18:24:22.284977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.072 [2024-07-21 18:24:22.285004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.330 #13 NEW cov: 12060 ft: 12779 corp: 4/103b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBit- 00:09:04.330 [2024-07-21 18:24:22.345099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.345135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:09:04.330 [2024-07-21 18:24:22.345237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.345261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.330 [2024-07-21 18:24:22.345368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cac9caca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.345394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.330 [2024-07-21 18:24:22.345502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.345525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.330 #14 NEW cov: 12145 ft: 13007 corp: 5/138b lim: 40 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertByte- 00:09:04.330 [2024-07-21 18:24:22.425498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.425532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.330 [2024-07-21 18:24:22.425632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.425654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.330 [2024-07-21 18:24:22.425764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.425785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.330 [2024-07-21 18:24:22.425891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacaca0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.425911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.330 #15 NEW cov: 12145 ft: 13161 corp: 6/173b lim: 40 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CrossOver- 00:09:04.330 [2024-07-21 18:24:22.485493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.485528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.330 [2024-07-21 18:24:22.485635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.330 [2024-07-21 18:24:22.485658] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.330 #16 NEW cov: 12145 ft: 13700 corp: 7/191b lim: 40 exec/s: 0 rss: 73Mb L: 18/35 MS: 1 InsertRepeatedBytes- 00:09:04.588 [2024-07-21 18:24:22.556368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.556408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.556516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.556540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.556641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacaca36 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.556663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.556763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:35353535 cdw11:353536ca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.556785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.588 #17 NEW cov: 12145 ft: 13765 corp: 8/225b lim: 40 exec/s: 0 rss: 73Mb L: 34/35 MS: 1 ChangeBinInt- 00:09:04.588 [2024-07-21 18:24:22.616756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.616791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.616893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.616915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.617014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.617037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.617145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.617168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.588 #18 NEW cov: 12145 ft: 13798 corp: 9/259b lim: 40 exec/s: 0 rss: 73Mb L: 34/35 MS: 1 CopyPart- 00:09:04.588 [2024-07-21 18:24:22.676940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.676975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.677078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.677102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.677204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.677234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.677351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.677378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.588 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:04.588 #24 NEW cov: 12168 ft: 13831 corp: 10/297b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 CrossOver- 00:09:04.588 [2024-07-21 18:24:22.766783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ac2caca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.766819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.588 [2024-07-21 18:24:22.766927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.588 [2024-07-21 18:24:22.766950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.856 #25 NEW cov: 12168 ft: 13964 corp: 11/319b lim: 40 exec/s: 25 rss: 73Mb L: 22/38 MS: 1 EraseBytes- 00:09:04.856 [2024-07-21 18:24:22.847397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.847431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:22.847542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ca0acaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.847567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:22.847675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.847698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.856 #26 NEW cov: 12168 ft: 14162 corp: 12/343b lim: 40 exec/s: 26 rss: 73Mb L: 24/38 MS: 1 CrossOver- 00:09:04.856 [2024-07-21 18:24:22.907825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.907861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:22.907965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:caca4eca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.907988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:22.908091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.908114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:22.908218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.908240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.856 #27 NEW cov: 12168 ft: 14175 corp: 13/378b lim: 40 exec/s: 27 rss: 73Mb L: 35/38 MS: 1 InsertByte- 00:09:04.856 [2024-07-21 18:24:22.967921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacaca56 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.967963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:22.968076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:56565656 cdw11:56565656 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.968101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:22.968217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:caca0aca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.968241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:22.968343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:22.968365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.856 #28 NEW cov: 12168 ft: 14220 corp: 14/411b lim: 40 exec/s: 28 rss: 73Mb L: 33/38 MS: 1 InsertRepeatedBytes- 00:09:04.856 [2024-07-21 18:24:23.048305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 
cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:23.048340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:23.048447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:caca4eca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:23.048470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:23.048573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:23.048595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.856 [2024-07-21 18:24:23.048704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:01000000 cdw11:00000048 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.856 [2024-07-21 18:24:23.048727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.120 #29 NEW cov: 12168 ft: 14295 corp: 15/446b lim: 40 exec/s: 29 rss: 73Mb L: 35/38 MS: 1 CMP- DE: "\001\000\000\000\000\000\000H"- 00:09:05.120 [2024-07-21 18:24:23.128498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.120 [2024-07-21 18:24:23.128535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.120 [2024-07-21 18:24:23.128641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ca0acaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.120 [2024-07-21 18:24:23.128666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.120 [2024-07-21 18:24:23.128768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.120 [2024-07-21 18:24:23.128791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.120 [2024-07-21 18:24:23.128891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.120 [2024-07-21 18:24:23.128918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.120 #30 NEW cov: 12168 ft: 14358 corp: 16/478b lim: 40 exec/s: 30 rss: 73Mb L: 32/38 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:09:05.120 [2024-07-21 18:24:23.188700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.120 [2024-07-21 18:24:23.188734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.120 [2024-07-21 
18:24:23.188844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.120 [2024-07-21 18:24:23.188868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.120 [2024-07-21 18:24:23.188976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacaca30 cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.120 [2024-07-21 18:24:23.188998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.120 [2024-07-21 18:24:23.189099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.120 [2024-07-21 18:24:23.189123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.120 #31 NEW cov: 12168 ft: 14376 corp: 17/517b lim: 40 exec/s: 31 rss: 73Mb L: 39/39 MS: 1 InsertByte- 00:09:05.120 [2024-07-21 18:24:23.268132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0acaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.121 [2024-07-21 18:24:23.268169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.121 #32 NEW cov: 12168 ft: 14786 corp: 18/526b lim: 40 exec/s: 32 rss: 74Mb L: 9/39 MS: 1 CrossOver- 00:09:05.378 [2024-07-21 18:24:23.339493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acaffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.339529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.339638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.339663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.339765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.339788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.339891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.339915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.340014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.340038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 
cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:05.378 #33 NEW cov: 12168 ft: 14853 corp: 19/566b lim: 40 exec/s: 33 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:09:05.378 [2024-07-21 18:24:23.399460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ac2caca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.399494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.399596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.399620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.399725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.399750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.399851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.399875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.378 #34 NEW cov: 12168 ft: 14867 corp: 20/603b lim: 40 exec/s: 34 rss: 74Mb L: 37/40 MS: 1 InsertRepeatedBytes- 00:09:05.378 [2024-07-21 18:24:23.459725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.459760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.459864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cac9caca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.459888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.459983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.460006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.460117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.460140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.378 #35 NEW cov: 12168 ft: 14940 corp: 21/638b lim: 40 exec/s: 35 rss: 74Mb L: 35/40 MS: 1 CopyPart- 00:09:05.378 [2024-07-21 18:24:23.539963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ac2caca 
cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.539999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.540105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.540128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.540235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.540259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.378 [2024-07-21 18:24:23.540362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.378 [2024-07-21 18:24:23.540385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.378 #36 NEW cov: 12168 ft: 15010 corp: 22/674b lim: 40 exec/s: 36 rss: 74Mb L: 36/40 MS: 1 CrossOver- 00:09:05.637 [2024-07-21 18:24:23.600084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ac2caca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.600118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.600223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:dacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.600246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.600348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.600371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.600478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.600500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.637 #37 NEW cov: 12168 ft: 15025 corp: 23/710b lim: 40 exec/s: 37 rss: 74Mb L: 36/40 MS: 1 ChangeBit- 00:09:05.637 [2024-07-21 18:24:23.680560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ac2caca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.680593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.680694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.680717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.680819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.680841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.680940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacac8 cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.680965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.637 #38 NEW cov: 12168 ft: 15047 corp: 24/747b lim: 40 exec/s: 38 rss: 74Mb L: 37/40 MS: 1 ChangeBit- 00:09:05.637 [2024-07-21 18:24:23.760297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.760331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.760439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.760461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.637 #39 NEW cov: 12168 ft: 15063 corp: 25/769b lim: 40 exec/s: 39 rss: 74Mb L: 22/40 MS: 1 EraseBytes- 00:09:05.637 [2024-07-21 18:24:23.821114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.821149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.821257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.821280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.821388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cacaca30 cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.821412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.637 [2024-07-21 18:24:23.821522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.637 [2024-07-21 18:24:23.821545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.896 #40 NEW cov: 12168 ft: 15089 corp: 
26/807b lim: 40 exec/s: 20 rss: 74Mb L: 38/40 MS: 1 EraseBytes- 00:09:05.896 #40 DONE cov: 12168 ft: 15089 corp: 26/807b lim: 40 exec/s: 20 rss: 74Mb 00:09:05.896 ###### Recommended dictionary. ###### 00:09:05.896 "\001\000\000\000\000\000\000H" # Uses: 0 00:09:05.896 "\377\377\377\377\377\377\377\377" # Uses: 0 00:09:05.896 ###### End of recommended dictionary. ###### 00:09:05.896 Done 40 runs in 2 second(s) 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:05.896 18:24:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:09:05.896 [2024-07-21 18:24:24.070808] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
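
The "#N NEW" lines that fill this log are standard libFuzzer status reports: "cov" counts covered code edges/blocks, "ft" counts distinct coverage features, "corp: 26/807b" gives the corpus size as inputs/total bytes, "lim" is the current input-length cap, "exec/s" the execution rate, "rss" resident memory, "L: 38/40" the new input's length over the largest in the corpus, and the trailing "MS:" field names the mutation chain (CrossOver, EraseBytes, InsertByte, and so on) that produced the input. The "Recommended dictionary" block printed at the end of a run lists byte sequences, captured from comparison instrumentation, that helped reach new coverage and can be reused as a fuzzing dictionary. As a rough way to chart coverage growth from a saved copy of this console output, the sketch below is a minimal shell pass; it assumes the stream was captured to console.log (a hypothetical filename) and relies only on the libFuzzer line format shown above:

    # Print "<input-number> <edge-coverage>" for every NEW status report
    # (e.g. "24 12168") so the coverage curve can be inspected or plotted.
    grep -o '#[0-9]* NEW cov: [0-9]*' console.log |
      awk '{ gsub("#", "", $1); print $1, $4 }'

In the harness invocation above, -t 1 appears to set a roughly one-second fuzzing budget per target (the "Done 40 runs in 2 second(s)" totals presumably also cover target setup and teardown), and the -F 'trtype:tcp ... trsvcid:4411' argument is the NVMe-oF transport ID that the fuzzer's initiator connects to.
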
00:09:05.896 [2024-07-21 18:24:24.070882] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3821062 ] 00:09:06.155 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.155 [2024-07-21 18:24:24.308141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.413 [2024-07-21 18:24:24.396955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.413 [2024-07-21 18:24:24.461332] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.413 [2024-07-21 18:24:24.477573] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:09:06.413 INFO: Running with entropic power schedule (0xFF, 100). 00:09:06.413 INFO: Seed: 2954660292 00:09:06.413 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:06.413 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:06.413 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:06.413 INFO: A corpus is not provided, starting from an empty corpus 00:09:06.413 #2 INITED exec/s: 0 rss: 65Mb 00:09:06.413 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:06.413 This may also happen if the target rejected all inputs we tried so far 00:09:06.413 [2024-07-21 18:24:24.555288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.413 [2024-07-21 18:24:24.555331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.413 [2024-07-21 18:24:24.555438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.413 [2024-07-21 18:24:24.555459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.982 NEW_FUNC[1/698]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:09:06.982 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:06.982 #10 NEW cov: 11936 ft: 11933 corp: 2/20b lim: 40 exec/s: 0 rss: 72Mb L: 19/19 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes- 00:09:06.982 [2024-07-21 18:24:25.036215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.982 [2024-07-21 18:24:25.036267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.982 [2024-07-21 18:24:25.036376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.982 [2024-07-21 18:24:25.036400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.982 #11 NEW cov: 12066 ft: 12574 corp: 3/39b lim: 40 exec/s: 0 
rss: 72Mb L: 19/19 MS: 1 CrossOver- 00:09:06.982 [2024-07-21 18:24:25.106352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.982 [2024-07-21 18:24:25.106384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.982 [2024-07-21 18:24:25.106479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.982 [2024-07-21 18:24:25.106499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.982 #12 NEW cov: 12072 ft: 12723 corp: 4/58b lim: 40 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CopyPart- 00:09:06.982 [2024-07-21 18:24:25.166498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:07ff7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.982 [2024-07-21 18:24:25.166526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.982 [2024-07-21 18:24:25.166611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.982 [2024-07-21 18:24:25.166628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.241 #13 NEW cov: 12157 ft: 12948 corp: 5/77b lim: 40 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CopyPart- 00:09:07.241 [2024-07-21 18:24:25.226751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.241 [2024-07-21 18:24:25.226778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.241 [2024-07-21 18:24:25.226864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.241 [2024-07-21 18:24:25.226883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.241 #14 NEW cov: 12157 ft: 12988 corp: 6/96b lim: 40 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CopyPart- 00:09:07.241 [2024-07-21 18:24:25.276956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.241 [2024-07-21 18:24:25.276984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.241 [2024-07-21 18:24:25.277079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff3dff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.241 [2024-07-21 18:24:25.277097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.241 #15 NEW cov: 12157 ft: 13172 corp: 7/116b lim: 40 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 InsertByte- 00:09:07.241 [2024-07-21 18:24:25.327185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.241 [2024-07-21 18:24:25.327216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.242 [2024-07-21 18:24:25.327303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffdff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.242 [2024-07-21 18:24:25.327319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.242 #16 NEW cov: 12157 ft: 13337 corp: 8/135b lim: 40 exec/s: 0 rss: 73Mb L: 19/20 MS: 1 ChangeBinInt- 00:09:07.242 [2024-07-21 18:24:25.377369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.242 [2024-07-21 18:24:25.377394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.242 [2024-07-21 18:24:25.377481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff303dff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.242 [2024-07-21 18:24:25.377499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.242 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:07.242 #17 NEW cov: 12180 ft: 13353 corp: 9/155b lim: 40 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:09:07.242 [2024-07-21 18:24:25.437716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:07ff7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.242 [2024-07-21 18:24:25.437741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.242 [2024-07-21 18:24:25.437835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff29ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.242 [2024-07-21 18:24:25.437852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.501 #18 NEW cov: 12180 ft: 13372 corp: 10/175b lim: 40 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 InsertByte- 00:09:07.501 [2024-07-21 18:24:25.498682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:07ff7eff cdw11:ffff07ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.498708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.501 [2024-07-21 18:24:25.498794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:7effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.498813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.501 [2024-07-21 18:24:25.498902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 
18:24:25.498920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.501 [2024-07-21 18:24:25.498997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.499014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.501 #19 NEW cov: 12180 ft: 13743 corp: 11/211b lim: 40 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 CopyPart- 00:09:07.501 [2024-07-21 18:24:25.549074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:07ff7eff cdw11:ffff07ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.549100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.501 [2024-07-21 18:24:25.549183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:7effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.549201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.501 [2024-07-21 18:24:25.549292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.549309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.501 [2024-07-21 18:24:25.549395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.549414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.501 #20 NEW cov: 12180 ft: 13795 corp: 12/247b lim: 40 exec/s: 20 rss: 73Mb L: 36/36 MS: 1 ChangeBinInt- 00:09:07.501 [2024-07-21 18:24:25.618728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.618772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.501 [2024-07-21 18:24:25.618866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:2cff303d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.618884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.501 #21 NEW cov: 12180 ft: 13831 corp: 13/268b lim: 40 exec/s: 21 rss: 73Mb L: 21/36 MS: 1 InsertByte- 00:09:07.501 [2024-07-21 18:24:25.679002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.679028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.501 [2024-07-21 18:24:25.679118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.501 [2024-07-21 18:24:25.679137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.760 #22 NEW cov: 12180 ft: 13848 corp: 14/287b lim: 40 exec/s: 22 rss: 73Mb L: 19/36 MS: 1 ChangeByte- 00:09:07.760 [2024-07-21 18:24:25.749600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.760 [2024-07-21 18:24:25.749626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.760 [2024-07-21 18:24:25.749712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.760 [2024-07-21 18:24:25.749730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.760 #23 NEW cov: 12180 ft: 13855 corp: 15/306b lim: 40 exec/s: 23 rss: 74Mb L: 19/36 MS: 1 CopyPart- 00:09:07.760 [2024-07-21 18:24:25.820008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:07ff7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.760 [2024-07-21 18:24:25.820036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.760 [2024-07-21 18:24:25.820129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff29ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.760 [2024-07-21 18:24:25.820146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.760 #24 NEW cov: 12180 ft: 13936 corp: 16/326b lim: 40 exec/s: 24 rss: 74Mb L: 20/36 MS: 1 CopyPart- 00:09:07.760 [2024-07-21 18:24:25.890461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.760 [2024-07-21 18:24:25.890489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.760 [2024-07-21 18:24:25.890583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.760 [2024-07-21 18:24:25.890601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.760 #25 NEW cov: 12180 ft: 14036 corp: 17/346b lim: 40 exec/s: 25 rss: 74Mb L: 20/36 MS: 1 InsertByte- 00:09:07.760 [2024-07-21 18:24:25.940704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.760 [2024-07-21 18:24:25.940733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.760 [2024-07-21 18:24:25.940831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffdff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.760 [2024-07-21 18:24:25.940850] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.018 #26 NEW cov: 12180 ft: 14052 corp: 18/366b lim: 40 exec/s: 26 rss: 74Mb L: 20/36 MS: 1 InsertByte- 00:09:08.018 [2024-07-21 18:24:26.010722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070affff cdw11:fffffff1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.018 [2024-07-21 18:24:26.010751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.018 #27 NEW cov: 12180 ft: 14785 corp: 19/376b lim: 40 exec/s: 27 rss: 74Mb L: 10/36 MS: 1 EraseBytes- 00:09:08.018 [2024-07-21 18:24:26.071390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.018 [2024-07-21 18:24:26.071417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.018 [2024-07-21 18:24:26.071508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.018 [2024-07-21 18:24:26.071526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.018 #28 NEW cov: 12180 ft: 14851 corp: 20/394b lim: 40 exec/s: 28 rss: 74Mb L: 18/36 MS: 1 EraseBytes- 00:09:08.018 [2024-07-21 18:24:26.121818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.018 [2024-07-21 18:24:26.121846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.019 [2024-07-21 18:24:26.121941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff29ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.019 [2024-07-21 18:24:26.121958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.019 #29 NEW cov: 12180 ft: 14868 corp: 21/414b lim: 40 exec/s: 29 rss: 74Mb L: 20/36 MS: 1 CopyPart- 00:09:08.019 [2024-07-21 18:24:26.192780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffff07ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.019 [2024-07-21 18:24:26.192807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.019 [2024-07-21 18:24:26.192897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:7effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.019 [2024-07-21 18:24:26.192913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.019 [2024-07-21 18:24:26.193006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.019 [2024-07-21 18:24:26.193024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.019 
[2024-07-21 18:24:26.193112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.019 [2024-07-21 18:24:26.193131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.019 #30 NEW cov: 12180 ft: 14894 corp: 22/450b lim: 40 exec/s: 30 rss: 74Mb L: 36/36 MS: 1 CrossOver- 00:09:08.277 [2024-07-21 18:24:26.242266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.242294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.277 [2024-07-21 18:24:26.242389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.242406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.277 #31 NEW cov: 12180 ft: 14938 corp: 23/469b lim: 40 exec/s: 31 rss: 74Mb L: 19/36 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:09:08.277 [2024-07-21 18:24:26.293236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7effffff cdw11:ffff9797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.293261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.277 [2024-07-21 18:24:26.293364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:97979797 cdw11:97979797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.293380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.277 [2024-07-21 18:24:26.293467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:97979797 cdw11:97ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.293484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.277 [2024-07-21 18:24:26.293578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff29ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.293597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.277 #32 NEW cov: 12180 ft: 14951 corp: 24/504b lim: 40 exec/s: 32 rss: 74Mb L: 35/36 MS: 1 InsertRepeatedBytes- 00:09:08.277 [2024-07-21 18:24:26.352734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.352759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.277 [2024-07-21 18:24:26.352854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 
[2024-07-21 18:24:26.352872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.277 #33 NEW cov: 12180 ft: 14970 corp: 25/521b lim: 40 exec/s: 33 rss: 74Mb L: 17/36 MS: 1 EraseBytes- 00:09:08.277 [2024-07-21 18:24:26.412960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.412986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.277 [2024-07-21 18:24:26.413079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.413096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.277 #34 NEW cov: 12180 ft: 15062 corp: 26/540b lim: 40 exec/s: 34 rss: 74Mb L: 19/36 MS: 1 ShuffleBytes- 00:09:08.277 [2024-07-21 18:24:26.463849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7effffff cdw11:ffff9797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.463875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.277 [2024-07-21 18:24:26.463965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:97979797 cdw11:97979797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.463982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.277 [2024-07-21 18:24:26.464074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:97979797 cdw11:97ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.464091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.277 [2024-07-21 18:24:26.464186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff29ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.277 [2024-07-21 18:24:26.464203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.535 #35 NEW cov: 12180 ft: 15082 corp: 27/575b lim: 40 exec/s: 35 rss: 74Mb L: 35/36 MS: 1 ShuffleBytes- 00:09:08.535 [2024-07-21 18:24:26.524115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:070a7eff cdw11:ffff07ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.535 [2024-07-21 18:24:26.524141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.535 [2024-07-21 18:24:26.524234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:7effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.535 [2024-07-21 18:24:26.524250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.535 [2024-07-21 18:24:26.524344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.535 [2024-07-21 18:24:26.524360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.535 [2024-07-21 18:24:26.524454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.535 [2024-07-21 18:24:26.524472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.535 #36 NEW cov: 12180 ft: 15084 corp: 28/614b lim: 40 exec/s: 18 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:09:08.535 #36 DONE cov: 12180 ft: 15084 corp: 28/614b lim: 40 exec/s: 18 rss: 74Mb 00:09:08.535 ###### Recommended dictionary. ###### 00:09:08.535 "\000\000\000\000\000\000\000\000" # Uses: 0 00:09:08.535 ###### End of recommended dictionary. ###### 00:09:08.535 Done 36 runs in 2 second(s) 00:09:08.535 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:09:08.535 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:08.535 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:08.535 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:09:08.535 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:09:08.535 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:08.535 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:08.535 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:08.535 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:08.536 18:24:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:09:08.795 [2024-07-21 18:24:26.754109] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:08.795 [2024-07-21 18:24:26.754194] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3821415 ] 00:09:08.795 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.795 [2024-07-21 18:24:26.991395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.054 [2024-07-21 18:24:27.082546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.054 [2024-07-21 18:24:27.146736] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.054 [2024-07-21 18:24:27.162974] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:09:09.054 INFO: Running with entropic power schedule (0xFF, 100). 00:09:09.054 INFO: Seed: 1342694389 00:09:09.054 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:09.054 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:09.054 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:09.054 INFO: A corpus is not provided, starting from an empty corpus 00:09:09.054 #2 INITED exec/s: 0 rss: 65Mb 00:09:09.054 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:09.054 This may also happen if the target rejected all inputs we tried so far 00:09:09.054 [2024-07-21 18:24:27.221983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.054 [2024-07-21 18:24:27.222021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.571 NEW_FUNC[1/697]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:09:09.571 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:09.571 #7 NEW cov: 11926 ft: 11933 corp: 2/12b lim: 40 exec/s: 0 rss: 72Mb L: 11/11 MS: 5 InsertByte-CrossOver-EraseBytes-CrossOver-CMP- DE: "N\274\255\301\220\360+\000"- 00:09:09.571 [2024-07-21 18:24:27.713419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.571 [2024-07-21 18:24:27.713471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.571 [2024-07-21 18:24:27.713541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000a4ebc cdw11:adc190f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.571 [2024-07-21 18:24:27.713561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.571 NEW_FUNC[1/1]: 0x1da24d0 in spdk_thread_get_last_tsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1324 00:09:09.571 #8 NEW cov: 12064 ft: 13174 corp: 3/31b lim: 
40 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CopyPart- 00:09:09.829 [2024-07-21 18:24:27.793327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.829 [2024-07-21 18:24:27.793363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.829 #9 NEW cov: 12070 ft: 13391 corp: 4/43b lim: 40 exec/s: 0 rss: 73Mb L: 12/19 MS: 1 CrossOver- 00:09:09.829 [2024-07-21 18:24:27.843784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190ff4e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.829 [2024-07-21 18:24:27.843819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.829 [2024-07-21 18:24:27.843889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:bcadc190 cdw11:f02b000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.829 [2024-07-21 18:24:27.843910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.829 [2024-07-21 18:24:27.843977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0af02b00 cdw11:0a4e0abc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.829 [2024-07-21 18:24:27.843997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.829 #10 NEW cov: 12155 ft: 13925 corp: 5/67b lim: 40 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 CrossOver- 00:09:09.829 [2024-07-21 18:24:27.914180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.829 [2024-07-21 18:24:27.914221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.829 [2024-07-21 18:24:27.914293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000a0a0a cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.829 [2024-07-21 18:24:27.914313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.829 [2024-07-21 18:24:27.914383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:79797979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.829 [2024-07-21 18:24:27.914403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.829 [2024-07-21 18:24:27.914474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:79797979 cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.829 [2024-07-21 18:24:27.914493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.829 #11 NEW cov: 12155 ft: 14322 corp: 6/104b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:09:09.830 [2024-07-21 18:24:27.983885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:09.830 [2024-07-21 18:24:27.983921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.830 #12 NEW cov: 12155 ft: 14444 corp: 7/116b lim: 40 exec/s: 0 rss: 73Mb L: 12/37 MS: 1 ChangeBinInt- 00:09:09.830 [2024-07-21 18:24:28.034289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.830 [2024-07-21 18:24:28.034325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.830 [2024-07-21 18:24:28.034400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00343434 cdw11:34343434 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.830 [2024-07-21 18:24:28.034421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.830 [2024-07-21 18:24:28.034489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:3434340a cdw11:4ebcadc1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.830 [2024-07-21 18:24:28.034509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.087 #13 NEW cov: 12155 ft: 14497 corp: 8/145b lim: 40 exec/s: 0 rss: 73Mb L: 29/37 MS: 1 InsertRepeatedBytes- 00:09:10.087 [2024-07-21 18:24:28.084079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f04e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.088 [2024-07-21 18:24:28.084113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.088 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:10.088 #14 NEW cov: 12178 ft: 14620 corp: 9/159b lim: 40 exec/s: 0 rss: 73Mb L: 14/37 MS: 1 CopyPart- 00:09:10.088 [2024-07-21 18:24:28.134230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c194f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.088 [2024-07-21 18:24:28.134265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.088 #15 NEW cov: 12178 ft: 14700 corp: 10/171b lim: 40 exec/s: 0 rss: 73Mb L: 12/37 MS: 1 ChangeBit- 00:09:10.088 [2024-07-21 18:24:28.204431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c19490ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.088 [2024-07-21 18:24:28.204465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.088 #16 NEW cov: 12178 ft: 14830 corp: 11/183b lim: 40 exec/s: 16 rss: 73Mb L: 12/37 MS: 1 CrossOver- 00:09:10.088 [2024-07-21 18:24:28.274628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c194f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.088 [2024-07-21 18:24:28.274661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.346 #17 NEW cov: 12178 ft: 14856 corp: 12/195b lim: 40 exec/s: 17 rss: 73Mb L: 
12/37 MS: 1 CrossOver- 00:09:10.346 [2024-07-21 18:24:28.324774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.346 [2024-07-21 18:24:28.324808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.346 #18 NEW cov: 12178 ft: 14868 corp: 13/207b lim: 40 exec/s: 18 rss: 73Mb L: 12/37 MS: 1 ChangeBit- 00:09:10.346 [2024-07-21 18:24:28.374965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c1f04ebc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.346 [2024-07-21 18:24:28.375000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.346 #19 NEW cov: 12178 ft: 14890 corp: 14/221b lim: 40 exec/s: 19 rss: 73Mb L: 14/37 MS: 1 CopyPart- 00:09:10.346 [2024-07-21 18:24:28.445154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.346 [2024-07-21 18:24:28.445189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.346 #20 NEW cov: 12178 ft: 14899 corp: 15/233b lim: 40 exec/s: 20 rss: 73Mb L: 12/37 MS: 1 CopyPart- 00:09:10.346 [2024-07-21 18:24:28.495636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.346 [2024-07-21 18:24:28.495671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.346 [2024-07-21 18:24:28.495741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.346 [2024-07-21 18:24:28.495761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.346 [2024-07-21 18:24:28.495828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:4ebcadc1 cdw11:90f02b00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.346 [2024-07-21 18:24:28.495847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.346 #21 NEW cov: 12178 ft: 14915 corp: 16/260b lim: 40 exec/s: 21 rss: 73Mb L: 27/37 MS: 1 InsertRepeatedBytes- 00:09:10.605 [2024-07-21 18:24:28.565468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c194f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.565504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.605 #22 NEW cov: 12178 ft: 14943 corp: 17/272b lim: 40 exec/s: 22 rss: 73Mb L: 12/37 MS: 1 CopyPart- 00:09:10.605 [2024-07-21 18:24:28.615776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190ff4e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.615811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:09:10.605 [2024-07-21 18:24:28.615883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:bcadf02b cdw11:000a4e0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.615903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.605 #23 NEW cov: 12178 ft: 14999 corp: 18/289b lim: 40 exec/s: 23 rss: 73Mb L: 17/37 MS: 1 EraseBytes- 00:09:10.605 [2024-07-21 18:24:28.685822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c1ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.685856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.605 #24 NEW cov: 12178 ft: 15014 corp: 19/301b lim: 40 exec/s: 24 rss: 73Mb L: 12/37 MS: 1 CMP- DE: "\377\377\377\377"- 00:09:10.605 [2024-07-21 18:24:28.736145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c194f0ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.736179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.605 [2024-07-21 18:24:28.736254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:4ebcad2b cdw11:00c194f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.736275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.605 #25 NEW cov: 12178 ft: 15025 corp: 20/321b lim: 40 exec/s: 25 rss: 73Mb L: 20/37 MS: 1 CrossOver- 00:09:10.605 [2024-07-21 18:24:28.806685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.806718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.605 [2024-07-21 18:24:28.806789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000a0a0a cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.806813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.605 [2024-07-21 18:24:28.806880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.806899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.605 [2024-07-21 18:24:28.806963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:000afc0a cdw11:79797979 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.605 [2024-07-21 18:24:28.806981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.864 #26 NEW cov: 12178 ft: 15066 corp: 21/358b lim: 40 exec/s: 26 rss: 74Mb L: 37/37 MS: 1 CrossOver- 00:09:10.864 [2024-07-21 18:24:28.876344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c1f04ebc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.864 [2024-07-21 18:24:28.876378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.864 #27 NEW cov: 12178 ft: 15071 corp: 22/372b lim: 40 exec/s: 27 rss: 74Mb L: 14/37 MS: 1 ChangeByte- 00:09:10.864 [2024-07-21 18:24:28.946510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2dbcadc1 cdw11:94f02b00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.864 [2024-07-21 18:24:28.946543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.864 #31 NEW cov: 12178 ft: 15096 corp: 23/383b lim: 40 exec/s: 31 rss: 74Mb L: 11/37 MS: 4 ChangeBit-ChangeByte-ShuffleBytes-CrossOver- 00:09:10.864 [2024-07-21 18:24:28.997238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.864 [2024-07-21 18:24:28.997272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.864 [2024-07-21 18:24:28.997341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000acbcb cdw11:cbcbcbcb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.864 [2024-07-21 18:24:28.997361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.865 [2024-07-21 18:24:28.997431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:cbcbcbcb cdw11:cbcbcbcb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.865 [2024-07-21 18:24:28.997451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.865 [2024-07-21 18:24:28.997517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:cbcbcbcb cdw11:cbcbcbcb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.865 [2024-07-21 18:24:28.997536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.865 #32 NEW cov: 12178 ft: 15110 corp: 24/416b lim: 40 exec/s: 32 rss: 74Mb L: 33/37 MS: 1 InsertRepeatedBytes- 00:09:10.865 [2024-07-21 18:24:29.047174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c190f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.865 [2024-07-21 18:24:29.047209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.865 [2024-07-21 18:24:29.047248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00343434 cdw11:34343431 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.865 [2024-07-21 18:24:29.047266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.865 [2024-07-21 18:24:29.047299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:3337300a cdw11:4ebcadc1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.865 [2024-07-21 18:24:29.047315] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:11.124 #33 NEW cov: 12187 ft: 15134 corp: 25/445b lim: 40 exec/s: 33 rss: 74Mb L: 29/37 MS: 1 ChangeASCIIInt- 
00:09:11.124 [2024-07-21 18:24:29.117009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c1ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:11.124 [2024-07-21 18:24:29.117043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:11.124 #34 NEW cov: 12187 ft: 15146 corp: 26/457b lim: 40 exec/s: 34 rss: 74Mb L: 12/37 MS: 1 ChangeBinInt- 
00:09:11.124 [2024-07-21 18:24:29.187195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff4ebcad cdw11:c194f02b SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:11.124 [2024-07-21 18:24:29.187236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:11.124 #35 NEW cov: 12187 ft: 15156 corp: 27/470b lim: 40 exec/s: 17 rss: 74Mb L: 13/37 MS: 1 InsertByte- 
00:09:11.124 #35 DONE cov: 12187 ft: 15156 corp: 27/470b lim: 40 exec/s: 17 rss: 74Mb
00:09:11.124 ###### Recommended dictionary. ######
00:09:11.124 "N\274\255\301\220\360+\000" # Uses: 0
00:09:11.124 "\377\377\377\377" # Uses: 0
00:09:11.124 ###### End of recommended dictionary. ######
00:09:11.124 Done 35 runs in 2 second(s)
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413'
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:11.382 18:24:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13
00:09:11.382 [2024-07-21 18:24:29.426308] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:09:11.382 [2024-07-21 18:24:29.426414] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3821777 ]
00:09:11.382 EAL: No free 2048 kB hugepages reported on node 1
00:09:11.641 [2024-07-21 18:24:29.748082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:11.641 [2024-07-21 18:24:29.851076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:11.641 [2024-07-21 18:24:29.915423] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:11.899 [2024-07-21 18:24:29.931663] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 ***
00:09:11.899 INFO: Running with entropic power schedule (0xFF, 100).
00:09:11.899 INFO: Seed: 4113694759
00:09:11.899 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4),
00:09:11.899 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358),
00:09:11.899 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:09:11.899 INFO: A corpus is not provided, starting from an empty corpus
00:09:11.899 #2 INITED exec/s: 0 rss: 65Mb
00:09:11.899 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
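[Aside on the run that just completed above.] At the end of each run libFuzzer prints a "Recommended dictionary." block; run 12 suggested the tokens "N\274\255\301\220\360+\000" and "\377\377\377\377" (C-style octal escapes). Such tokens can be carried into a later run as a dictionary file. A minimal sketch follows, assuming a plain libFuzzer target: the file path is arbitrary, the bytes are re-encoded from octal into the \xNN hex form that dictionary files accept, and nothing in this log shows whether the llvm_nvme_fuzz wrapper forwards extra libFuzzer options such as -dict=, so the final command is illustrative only.

# Hypothetical dictionary distilled from run 12's recommended tokens
# (\274 octal = \xbc hex, \377 = \xff, and so on).
cat > /tmp/llvm_nvmf_12.dict <<'EOF'
kw1="N\xbc\xad\xc1\x90\xf0+\x00"
kw2="\xff\xff\xff\xff"
EOF
# -dict= is a stock libFuzzer flag; passing it through this SPDK harness is an
# assumption, not something this log demonstrates ("/path/to" is a placeholder).
/path/to/llvm_nvme_fuzz -dict=/tmp/llvm_nvmf_12.dict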
00:09:11.899 This may also happen if the target rejected all inputs we tried so far 00:09:11.899 [2024-07-21 18:24:30.009821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.899 [2024-07-21 18:24:30.009878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.899 [2024-07-21 18:24:30.010019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.899 [2024-07-21 18:24:30.010046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.899 [2024-07-21 18:24:30.010199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.899 [2024-07-21 18:24:30.010230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.899 [2024-07-21 18:24:30.010376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.899 [2024-07-21 18:24:30.010401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.466 NEW_FUNC[1/697]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:09:12.467 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:12.467 #6 NEW cov: 11921 ft: 11916 corp: 2/37b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 4 CrossOver-InsertByte-InsertByte-InsertRepeatedBytes- 00:09:12.467 [2024-07-21 18:24:30.498981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.467 [2024-07-21 18:24:30.499048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.467 #13 NEW cov: 12052 ft: 13236 corp: 3/47b lim: 40 exec/s: 0 rss: 72Mb L: 10/36 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:12.467 [2024-07-21 18:24:30.558983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff07 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.467 [2024-07-21 18:24:30.559018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.467 #14 NEW cov: 12058 ft: 13417 corp: 4/57b lim: 40 exec/s: 0 rss: 72Mb L: 10/36 MS: 1 CMP- DE: "\377\377\377\007"- 00:09:12.467 [2024-07-21 18:24:30.629167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:40ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.467 [2024-07-21 18:24:30.629201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.467 #17 NEW cov: 12143 ft: 13759 corp: 5/67b lim: 40 exec/s: 0 rss: 72Mb L: 
10/36 MS: 3 EraseBytes-CopyPart-PersAutoDict- DE: "\377\377\377\007"- 00:09:12.467 [2024-07-21 18:24:30.679351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff07 cdw11:fffffffd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.467 [2024-07-21 18:24:30.679386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.725 #18 NEW cov: 12143 ft: 13847 corp: 6/77b lim: 40 exec/s: 0 rss: 73Mb L: 10/36 MS: 1 ChangeBinInt- 00:09:12.725 [2024-07-21 18:24:30.749523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffef07 cdw11:fffffffd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.725 [2024-07-21 18:24:30.749556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.725 #29 NEW cov: 12143 ft: 13907 corp: 7/87b lim: 40 exec/s: 0 rss: 73Mb L: 10/36 MS: 1 ChangeBit- 00:09:12.725 [2024-07-21 18:24:30.820127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.725 [2024-07-21 18:24:30.820161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.725 [2024-07-21 18:24:30.820240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dbdbffff cdw11:ffff07db SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.725 [2024-07-21 18:24:30.820260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.725 [2024-07-21 18:24:30.820330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.725 [2024-07-21 18:24:30.820349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.725 [2024-07-21 18:24:30.820418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.725 [2024-07-21 18:24:30.820438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.725 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:12.725 #30 NEW cov: 12166 ft: 14007 corp: 8/123b lim: 40 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 CrossOver- 00:09:12.725 [2024-07-21 18:24:30.889877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:40fffcff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.725 [2024-07-21 18:24:30.889910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.983 #36 NEW cov: 12166 ft: 14012 corp: 9/133b lim: 40 exec/s: 0 rss: 73Mb L: 10/36 MS: 1 ChangeBinInt- 00:09:12.983 [2024-07-21 18:24:30.960080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ef07ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.983 [2024-07-21 18:24:30.960113] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.983 #39 NEW cov: 12166 ft: 14030 corp: 10/148b lim: 40 exec/s: 39 rss: 73Mb L: 15/36 MS: 3 EraseBytes-ChangeByte-CrossOver- 00:09:12.983 [2024-07-21 18:24:31.010227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ebffffff cdw11:ff40ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.983 [2024-07-21 18:24:31.010264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.983 #40 NEW cov: 12166 ft: 14071 corp: 11/159b lim: 40 exec/s: 40 rss: 73Mb L: 11/36 MS: 1 InsertByte- 00:09:12.983 [2024-07-21 18:24:31.060374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:40ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.983 [2024-07-21 18:24:31.060407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.983 #41 NEW cov: 12166 ft: 14111 corp: 12/169b lim: 40 exec/s: 41 rss: 73Mb L: 10/36 MS: 1 CrossOver- 00:09:12.983 [2024-07-21 18:24:31.110515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff07 cdw11:ffffff07 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.983 [2024-07-21 18:24:31.110550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.983 #42 NEW cov: 12166 ft: 14120 corp: 13/179b lim: 40 exec/s: 42 rss: 73Mb L: 10/36 MS: 1 PersAutoDict- DE: "\377\377\377\007"- 00:09:12.983 [2024-07-21 18:24:31.160642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:e99e6ddb cdw11:92f02b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.983 [2024-07-21 18:24:31.160677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.241 #43 NEW cov: 12166 ft: 14137 corp: 14/189b lim: 40 exec/s: 43 rss: 73Mb L: 10/36 MS: 1 CMP- DE: "\351\236m\333\222\360+\000"- 00:09:13.241 [2024-07-21 18:24:31.230829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ef07ff40 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.230862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.241 #44 NEW cov: 12166 ft: 14163 corp: 15/200b lim: 40 exec/s: 44 rss: 73Mb L: 11/36 MS: 1 EraseBytes- 00:09:13.241 [2024-07-21 18:24:31.301020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff07 cdw11:fffffffd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.301054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.241 #45 NEW cov: 12166 ft: 14170 corp: 16/212b lim: 40 exec/s: 45 rss: 73Mb L: 12/36 MS: 1 CopyPart- 00:09:13.241 [2024-07-21 18:24:31.351331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.351365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.241 [2024-07-21 18:24:31.351436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.351456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.241 #46 NEW cov: 12166 ft: 14379 corp: 17/230b lim: 40 exec/s: 46 rss: 73Mb L: 18/36 MS: 1 EraseBytes- 00:09:13.241 [2024-07-21 18:24:31.401500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ef07ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.401534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.241 [2024-07-21 18:24:31.401604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:9ffffdff cdw11:402fffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.401625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.241 #47 NEW cov: 12166 ft: 14404 corp: 18/246b lim: 40 exec/s: 47 rss: 73Mb L: 16/36 MS: 1 InsertByte- 00:09:13.241 [2024-07-21 18:24:31.451876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.451910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.241 [2024-07-21 18:24:31.451982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dbdbdb7e cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.452002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.241 [2024-07-21 18:24:31.452072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.452092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.241 [2024-07-21 18:24:31.452161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.241 [2024-07-21 18:24:31.452180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.501 #48 NEW cov: 12166 ft: 14439 corp: 19/282b lim: 40 exec/s: 48 rss: 73Mb L: 36/36 MS: 1 ChangeByte- 00:09:13.501 [2024-07-21 18:24:31.502056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.502091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.501 [2024-07-21 18:24:31.502163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 
cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.502183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.501 [2024-07-21 18:24:31.502260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:7edbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.502280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.501 [2024-07-21 18:24:31.502350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.502370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.501 #49 NEW cov: 12166 ft: 14453 corp: 20/318b lim: 40 exec/s: 49 rss: 73Mb L: 36/36 MS: 1 CopyPart- 00:09:13.501 [2024-07-21 18:24:31.571820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ef07ff40 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.571853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.501 #50 NEW cov: 12166 ft: 14471 corp: 21/329b lim: 40 exec/s: 50 rss: 73Mb L: 11/36 MS: 1 ShuffleBytes- 00:09:13.501 [2024-07-21 18:24:31.642195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.642236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.501 [2024-07-21 18:24:31.642309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.642333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.501 #51 NEW cov: 12166 ft: 14473 corp: 22/349b lim: 40 exec/s: 51 rss: 74Mb L: 20/36 MS: 1 EraseBytes- 00:09:13.501 [2024-07-21 18:24:31.712636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.712669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.501 [2024-07-21 18:24:31.712740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dbdbffff cdw11:ffff07db SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.712760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.501 [2024-07-21 18:24:31.712833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.712852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.501 [2024-07-21 18:24:31.712922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dbdbdbdb cdw11:dbdbd6db SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.501 [2024-07-21 18:24:31.712942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.760 #52 NEW cov: 12166 ft: 14487 corp: 23/385b lim: 40 exec/s: 52 rss: 74Mb L: 36/36 MS: 1 ChangeBinInt- 00:09:13.760 [2024-07-21 18:24:31.782944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.760 [2024-07-21 18:24:31.782977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.760 [2024-07-21 18:24:31.783046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.760 [2024-07-21 18:24:31.783066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.760 [2024-07-21 18:24:31.783135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dbdbdbdb cdw11:7edbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.760 [2024-07-21 18:24:31.783158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.760 [2024-07-21 18:24:31.783233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.760 [2024-07-21 18:24:31.783253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.760 [2024-07-21 18:24:31.783324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:dbdbdbdb cdw11:0a5bbf0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.760 [2024-07-21 18:24:31.783344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.760 #53 NEW cov: 12166 ft: 14569 corp: 24/425b lim: 40 exec/s: 53 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:09:13.760 [2024-07-21 18:24:31.832505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff41ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.760 [2024-07-21 18:24:31.832538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.760 #54 NEW cov: 12166 ft: 14581 corp: 25/436b lim: 40 exec/s: 54 rss: 74Mb L: 11/40 MS: 1 InsertByte- 00:09:13.760 [2024-07-21 18:24:31.882676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.761 [2024-07-21 18:24:31.882709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.761 #55 NEW cov: 12166 ft: 14604 corp: 26/446b lim: 40 exec/s: 55 rss: 74Mb L: 10/40 MS: 1 ChangeBinInt- 00:09:13.761 [2024-07-21 18:24:31.933400] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.761 [2024-07-21 18:24:31.933433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.761 [2024-07-21 18:24:31.933506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dbdbdbdb cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.761 [2024-07-21 18:24:31.933527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.761 [2024-07-21 18:24:31.933600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.761 [2024-07-21 18:24:31.933619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.761 [2024-07-21 18:24:31.933693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.761 [2024-07-21 18:24:31.933713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.761 [2024-07-21 18:24:31.933784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:dbdbdbdb cdw11:0a5bbf0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:13.761 [2024-07-21 18:24:31.933804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.761 #56 NEW cov: 12166 ft: 14619 corp: 27/486b lim: 40 exec/s: 56 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:09:14.019 [2024-07-21 18:24:31.982924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fb41ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.020 [2024-07-21 18:24:31.982958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.020 #57 NEW cov: 12166 ft: 14624 corp: 28/497b lim: 40 exec/s: 28 rss: 74Mb L: 11/40 MS: 1 ChangeBit- 00:09:14.020 #57 DONE cov: 12166 ft: 14624 corp: 28/497b lim: 40 exec/s: 28 rss: 74Mb 00:09:14.020 ###### Recommended dictionary. ###### 00:09:14.020 "\377\377\377\007" # Uses: 3 00:09:14.020 "\351\236m\333\222\360+\000" # Uses: 0 00:09:14.020 ###### End of recommended dictionary. 
######
00:09:14.020 Done 57 runs in 2 second(s)
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414'
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:14.020 18:24:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14
00:09:14.279 [2024-07-21 18:24:32.240954] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
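[Aside on the start_llvm_fuzz trace above, which repeats for every fuzzer index.] Condensed, the per-fuzzer setup pattern looks like the sketch below. Variable and function names come from the xtrace itself except where marked; bash xtrace does not print redirections, so where the sed output and the leak-suppression lines land ($nvmf_cfg and $suppress_file) is an assumption, as is $rootdir standing in for the spdk checkout path.

# Sketch reconstructed from the trace; not the verbatim nvmf/run.sh source.
start_llvm_fuzz() {
  local fuzzer_type=$1 timen=$2 core=$3        # e.g. 14 1 0x1
  local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
  local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
  local suppress_file=/var/tmp/suppress_nvmf_fuzz
  local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
  # Each fuzzer listens on its own NVMe/TCP port: "44" plus the zero-padded
  # index, hence 4413 for fuzzer 13 and 4414 for fuzzer 14 above.
  local port="44$(printf %02d "$fuzzer_type")"
  mkdir -p "$corpus_dir"
  local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # Point the template JSON config at this fuzzer's port (redirection assumed).
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # Known shutdown-path allocations: suppress them so LSAN reports only new leaks.
  echo leak:spdk_nvmf_qpair_disconnect >> "$suppress_file"
  echo leak:nvmf_ctrlr_create >> "$suppress_file"
}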
00:09:14.279 [2024-07-21 18:24:32.241045] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3822136 ] 00:09:14.279 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.537 [2024-07-21 18:24:32.612702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.537 [2024-07-21 18:24:32.709803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.796 [2024-07-21 18:24:32.774002] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.796 [2024-07-21 18:24:32.790252] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:09:14.796 INFO: Running with entropic power schedule (0xFF, 100). 00:09:14.796 INFO: Seed: 2676716640 00:09:14.796 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:14.796 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:14.796 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:14.796 INFO: A corpus is not provided, starting from an empty corpus 00:09:14.796 #2 INITED exec/s: 0 rss: 65Mb 00:09:14.796 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:14.796 This may also happen if the target rejected all inputs we tried so far 00:09:14.796 [2024-07-21 18:24:32.867986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.796 [2024-07-21 18:24:32.868053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.363 NEW_FUNC[1/699]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:09:15.363 NEW_FUNC[2/699]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:09:15.363 #13 NEW cov: 11923 ft: 11908 corp: 2/12b lim: 35 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 InsertRepeatedBytes- 00:09:15.363 [2024-07-21 18:24:33.368992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.363 [2024-07-21 18:24:33.369044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.363 #17 NEW cov: 12053 ft: 12341 corp: 3/23b lim: 35 exec/s: 0 rss: 72Mb L: 11/11 MS: 4 CrossOver-CopyPart-CrossOver-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:09:15.363 [2024-07-21 18:24:33.429180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.363 [2024-07-21 18:24:33.429231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.363 #18 NEW cov: 12069 ft: 12627 corp: 4/35b lim: 35 exec/s: 0 rss: 73Mb L: 12/12 MS: 1 InsertByte- 00:09:15.363 [2024-07-21 18:24:33.509411] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.363 [2024-07-21 
18:24:33.509454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.363 #20 NEW cov: 12154 ft: 12915 corp: 5/44b lim: 35 exec/s: 0 rss: 73Mb L: 9/12 MS: 2 ChangeBit-PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:09:15.363 [2024-07-21 18:24:33.569795] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.363 [2024-07-21 18:24:33.569835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.620 #27 NEW cov: 12154 ft: 13035 corp: 6/53b lim: 35 exec/s: 0 rss: 73Mb L: 9/12 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:15.621 [2024-07-21 18:24:33.630153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.621 [2024-07-21 18:24:33.630194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.621 #28 NEW cov: 12154 ft: 13170 corp: 7/64b lim: 35 exec/s: 0 rss: 73Mb L: 11/12 MS: 1 CopyPart- 00:09:15.621 [2024-07-21 18:24:33.710444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.621 [2024-07-21 18:24:33.710483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.621 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:15.621 #34 NEW cov: 12177 ft: 13254 corp: 8/73b lim: 35 exec/s: 0 rss: 73Mb L: 9/12 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:09:15.621 [2024-07-21 18:24:33.770525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.621 [2024-07-21 18:24:33.770562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.621 #35 NEW cov: 12177 ft: 13333 corp: 9/85b lim: 35 exec/s: 0 rss: 73Mb L: 12/12 MS: 1 CopyPart- 00:09:15.878 [2024-07-21 18:24:33.850867] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.878 [2024-07-21 18:24:33.850905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.878 #36 NEW cov: 12177 ft: 13362 corp: 10/96b lim: 35 exec/s: 36 rss: 73Mb L: 11/12 MS: 1 ChangeBinInt- 00:09:15.878 [2024-07-21 18:24:33.911376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.878 [2024-07-21 18:24:33.911418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.878 [2024-07-21 18:24:33.911526] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.878 [2024-07-21 18:24:33.911551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.878 #37 NEW 
cov: 12177 ft: 14100 corp: 11/115b lim: 35 exec/s: 37 rss: 73Mb L: 19/19 MS: 1 CMP- DE: "\000+\360\231\037c\316D"- 00:09:15.879 [2024-07-21 18:24:34.001519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.879 [2024-07-21 18:24:34.001561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.879 #38 NEW cov: 12177 ft: 14176 corp: 12/124b lim: 35 exec/s: 38 rss: 73Mb L: 9/19 MS: 1 ChangeBinInt- 00:09:15.879 [2024-07-21 18:24:34.081982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.879 [2024-07-21 18:24:34.082021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.137 #40 NEW cov: 12177 ft: 14214 corp: 13/133b lim: 35 exec/s: 40 rss: 73Mb L: 9/19 MS: 2 CrossOver-PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:09:16.137 [2024-07-21 18:24:34.143441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.137 [2024-07-21 18:24:34.143477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.137 [2024-07-21 18:24:34.143584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:5 cdw10:0000000c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.137 [2024-07-21 18:24:34.143609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.137 [2024-07-21 18:24:34.143710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:6 cdw10:0000000c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.137 [2024-07-21 18:24:34.143734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.137 [2024-07-21 18:24:34.143844] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.137 [2024-07-21 18:24:34.143871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.137 #41 NEW cov: 12177 ft: 14615 corp: 14/163b lim: 35 exec/s: 41 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:09:16.137 [2024-07-21 18:24:34.232766] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.137 [2024-07-21 18:24:34.232805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.137 #42 NEW cov: 12177 ft: 14632 corp: 15/172b lim: 35 exec/s: 42 rss: 73Mb L: 9/30 MS: 1 ShuffleBytes- 00:09:16.137 [2024-07-21 18:24:34.293402] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.137 [2024-07-21 18:24:34.293437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.137 [2024-07-21 18:24:34.293545] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.137 [2024-07-21 18:24:34.293571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.137 #43 NEW cov: 12177 ft: 14665 corp: 16/192b lim: 35 exec/s: 43 rss: 73Mb L: 20/30 MS: 1 PersAutoDict- DE: "\000+\360\231\037c\316D"- 00:09:16.395 [2024-07-21 18:24:34.363498] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.395 [2024-07-21 18:24:34.363539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.395 #44 NEW cov: 12177 ft: 14684 corp: 17/201b lim: 35 exec/s: 44 rss: 73Mb L: 9/30 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:09:16.395 [2024-07-21 18:24:34.424232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.395 [2024-07-21 18:24:34.424270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.395 [2024-07-21 18:24:34.424381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.395 [2024-07-21 18:24:34.424407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.395 #45 NEW cov: 12177 ft: 14699 corp: 18/221b lim: 35 exec/s: 45 rss: 73Mb L: 20/30 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:09:16.395 [2024-07-21 18:24:34.504128] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.395 [2024-07-21 18:24:34.504168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.395 #46 NEW cov: 12177 ft: 14744 corp: 19/230b lim: 35 exec/s: 46 rss: 73Mb L: 9/30 MS: 1 ShuffleBytes- 00:09:16.395 [2024-07-21 18:24:34.584625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.395 [2024-07-21 18:24:34.584665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.654 #47 NEW cov: 12177 ft: 14753 corp: 20/239b lim: 35 exec/s: 47 rss: 74Mb L: 9/30 MS: 1 CrossOver- 00:09:16.654 [2024-07-21 18:24:34.664954] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.654 [2024-07-21 18:24:34.664991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.654 #48 NEW cov: 12177 ft: 14845 corp: 21/248b lim: 35 exec/s: 48 rss: 74Mb L: 9/30 MS: 1 ShuffleBytes- 00:09:16.654 [2024-07-21 18:24:34.745399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.654 [2024-07-21 18:24:34.745440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:09:16.654 #49 NEW cov: 12177 ft: 14852 corp: 22/257b lim: 35 exec/s: 49 rss: 74Mb L: 9/30 MS: 1 EraseBytes- 00:09:16.654 [2024-07-21 18:24:34.806824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.654 [2024-07-21 18:24:34.806859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.654 [2024-07-21 18:24:34.806962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:5 cdw10:0000000c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.654 [2024-07-21 18:24:34.806985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.654 [2024-07-21 18:24:34.807091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:6 cdw10:0000000c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.654 [2024-07-21 18:24:34.807113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.654 [2024-07-21 18:24:34.807216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.654 [2024-07-21 18:24:34.807247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.654 #50 NEW cov: 12177 ft: 14863 corp: 23/287b lim: 35 exec/s: 25 rss: 74Mb L: 30/30 MS: 1 ShuffleBytes- 00:09:16.654 #50 DONE cov: 12177 ft: 14863 corp: 23/287b lim: 35 exec/s: 25 rss: 74Mb 00:09:16.654 ###### Recommended dictionary. ###### 00:09:16.654 "\377\377\377\377\377\377\377\377" # Uses: 5 00:09:16.654 "\000+\360\231\037c\316D" # Uses: 1 00:09:16.654 ###### End of recommended dictionary. 
###### 00:09:16.654 Done 50 runs in 2 second(s) 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:16.913 18:24:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:09:16.913 [2024-07-21 18:24:35.051040] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:09:16.913 [2024-07-21 18:24:35.051119] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3822492 ] 00:09:16.913 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.171 [2024-07-21 18:24:35.292374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.171 [2024-07-21 18:24:35.382015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.429 [2024-07-21 18:24:35.446265] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.429 [2024-07-21 18:24:35.462507] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:09:17.429 INFO: Running with entropic power schedule (0xFF, 100). 00:09:17.429 INFO: Seed: 1054758314 00:09:17.429 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:17.429 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:17.429 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:17.429 INFO: A corpus is not provided, starting from an empty corpus 00:09:17.429 #2 INITED exec/s: 0 rss: 65Mb 00:09:17.429 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:17.429 This may also happen if the target rejected all inputs we tried so far 00:09:17.996 NEW_FUNC[1/686]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:09:17.996 NEW_FUNC[2/686]: 0x4b8790 in feat_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:332 00:09:17.996 #7 NEW cov: 11838 ft: 11837 corp: 2/27b lim: 35 exec/s: 0 rss: 73Mb L: 26/26 MS: 5 ChangeByte-ChangeBit-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:09:17.996 [2024-07-21 18:24:35.999422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005bc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.996 [2024-07-21 18:24:35.999479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.996 NEW_FUNC[1/14]: 0x179f950 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:09:17.996 NEW_FUNC[2/14]: 0x179fb90 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:09:17.996 #18 NEW cov: 12097 ft: 12830 corp: 3/34b lim: 35 exec/s: 0 rss: 73Mb L: 7/26 MS: 1 InsertRepeatedBytes- 00:09:17.996 [2024-07-21 18:24:36.060161] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.996 [2024-07-21 18:24:36.060197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.996 [2024-07-21 18:24:36.060282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.996 [2024-07-21 18:24:36.060302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.996 [2024-07-21 18:24:36.060375] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.996 [2024-07-21 18:24:36.060394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.996 #19 NEW cov: 12103 ft: 13425 corp: 4/65b lim: 35 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:09:17.996 #20 NEW cov: 12188 ft: 13662 corp: 5/91b lim: 35 exec/s: 0 rss: 73Mb L: 26/31 MS: 1 ChangeByte- 00:09:18.254 #21 NEW cov: 12188 ft: 13855 corp: 6/118b lim: 35 exec/s: 0 rss: 73Mb L: 27/31 MS: 1 InsertByte- 00:09:18.254 #22 NEW cov: 12188 ft: 13881 corp: 7/144b lim: 35 exec/s: 0 rss: 73Mb L: 26/31 MS: 1 CopyPart- 00:09:18.254 #23 NEW cov: 12188 ft: 14478 corp: 8/175b lim: 35 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:09:18.254 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:18.254 #24 NEW cov: 12211 ft: 14554 corp: 9/201b lim: 35 exec/s: 0 rss: 73Mb L: 26/31 MS: 1 ChangeBinInt- 00:09:18.254 [2024-07-21 18:24:36.420376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005bc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.254 [2024-07-21 18:24:36.420412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.512 #25 NEW cov: 12211 ft: 14592 corp: 10/214b lim: 35 exec/s: 0 rss: 74Mb L: 13/31 MS: 1 InsertRepeatedBytes- 00:09:18.512 [2024-07-21 18:24:36.491037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.512 [2024-07-21 18:24:36.491071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.512 #26 NEW cov: 12211 ft: 14677 corp: 11/242b lim: 35 exec/s: 26 rss: 74Mb L: 28/31 MS: 1 InsertByte- 00:09:18.512 [2024-07-21 18:24:36.541123] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.512 [2024-07-21 18:24:36.541158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.512 #27 NEW cov: 12211 ft: 14741 corp: 12/268b lim: 35 exec/s: 27 rss: 74Mb L: 26/31 MS: 1 ChangeBinInt- 00:09:18.512 [2024-07-21 18:24:36.591292] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.512 [2024-07-21 18:24:36.591327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.512 #28 NEW cov: 12211 ft: 14802 corp: 13/294b lim: 35 exec/s: 28 rss: 74Mb L: 26/31 MS: 1 ChangeBinInt- 00:09:18.512 [2024-07-21 18:24:36.661042] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000029 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.512 [2024-07-21 18:24:36.661076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.512 #33 NEW cov: 12211 ft: 14876 corp: 14/306b lim: 35 exec/s: 33 rss: 74Mb L: 12/31 MS: 5 CrossOver-CopyPart-CrossOver-ChangeBit-CrossOver- 00:09:18.512 [2024-07-21 18:24:36.711174] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:4 cdw10:000005bc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.513 [2024-07-21 18:24:36.711209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.771 #34 NEW cov: 12211 ft: 14915 corp: 15/319b lim: 35 exec/s: 34 rss: 74Mb L: 13/31 MS: 1 ShuffleBytes- 00:09:18.771 [2024-07-21 18:24:36.781537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007a9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.771 [2024-07-21 18:24:36.781571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.771 [2024-07-21 18:24:36.781649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.771 [2024-07-21 18:24:36.781669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.771 #38 NEW cov: 12211 ft: 15109 corp: 16/335b lim: 35 exec/s: 38 rss: 74Mb L: 16/31 MS: 4 ShuffleBytes-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:09:18.771 #39 NEW cov: 12211 ft: 15153 corp: 17/362b lim: 35 exec/s: 39 rss: 74Mb L: 27/31 MS: 1 InsertByte- 00:09:18.771 [2024-07-21 18:24:36.902168] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.771 [2024-07-21 18:24:36.902204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.771 #40 NEW cov: 12211 ft: 15165 corp: 18/388b lim: 35 exec/s: 40 rss: 74Mb L: 26/31 MS: 1 CrossOver- 00:09:18.771 [2024-07-21 18:24:36.952021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005bc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.771 [2024-07-21 18:24:36.952056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.771 [2024-07-21 18:24:36.952132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.771 [2024-07-21 18:24:36.952151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.771 #41 NEW cov: 12211 ft: 15191 corp: 19/404b lim: 35 exec/s: 41 rss: 74Mb L: 16/31 MS: 1 InsertRepeatedBytes- 00:09:19.030 [2024-07-21 18:24:37.002164] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007a9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.030 [2024-07-21 18:24:37.002198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.030 [2024-07-21 18:24:37.002281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.030 [2024-07-21 18:24:37.002301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.030 #42 NEW cov: 12211 ft: 15217 corp: 20/420b lim: 35 exec/s: 42 rss: 74Mb L: 16/31 MS: 1 ChangeBit- 00:09:19.030 [2024-07-21 18:24:37.072733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:09:19.030 [2024-07-21 18:24:37.072767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.030 [2024-07-21 18:24:37.072844] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.030 [2024-07-21 18:24:37.072864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.030 [2024-07-21 18:24:37.072934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.030 [2024-07-21 18:24:37.072953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.030 #43 NEW cov: 12211 ft: 15239 corp: 21/452b lim: 35 exec/s: 43 rss: 74Mb L: 32/32 MS: 1 InsertByte- 00:09:19.030 [2024-07-21 18:24:37.142419] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005bc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.030 [2024-07-21 18:24:37.142452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.030 #44 NEW cov: 12211 ft: 15274 corp: 22/461b lim: 35 exec/s: 44 rss: 74Mb L: 9/32 MS: 1 EraseBytes- 00:09:19.030 #45 NEW cov: 12211 ft: 15282 corp: 23/487b lim: 35 exec/s: 45 rss: 74Mb L: 26/32 MS: 1 ShuffleBytes- 00:09:19.288 [2024-07-21 18:24:37.263370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.289 [2024-07-21 18:24:37.263405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.289 #46 NEW cov: 12211 ft: 15289 corp: 24/516b lim: 35 exec/s: 46 rss: 74Mb L: 29/32 MS: 1 CrossOver- 00:09:19.289 [2024-07-21 18:24:37.313492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.289 [2024-07-21 18:24:37.313526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.289 [2024-07-21 18:24:37.313603] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.289 [2024-07-21 18:24:37.313623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.289 [2024-07-21 18:24:37.313700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.289 [2024-07-21 18:24:37.313719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.289 #47 NEW cov: 12211 ft: 15294 corp: 25/548b lim: 35 exec/s: 47 rss: 74Mb L: 32/32 MS: 1 InsertByte- 00:09:19.289 [2024-07-21 18:24:37.363508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.289 [2024-07-21 18:24:37.363541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.289 #48 NEW cov: 12211 ft: 
15310 corp: 26/575b lim: 35 exec/s: 48 rss: 74Mb L: 27/32 MS: 1 InsertByte- 00:09:19.289 #49 NEW cov: 12211 ft: 15323 corp: 27/601b lim: 35 exec/s: 49 rss: 74Mb L: 26/32 MS: 1 ChangeBit- 00:09:19.289 [2024-07-21 18:24:37.484017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000049f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.289 [2024-07-21 18:24:37.484053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.548 #50 NEW cov: 12211 ft: 15331 corp: 28/630b lim: 35 exec/s: 25 rss: 74Mb L: 29/32 MS: 1 InsertRepeatedBytes- 00:09:19.548 #50 DONE cov: 12211 ft: 15331 corp: 28/630b lim: 35 exec/s: 25 rss: 74Mb 00:09:19.548 Done 50 runs in 2 second(s) 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:19.548 18:24:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:09:19.548 [2024-07-21 18:24:37.720471] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:09:19.548 [2024-07-21 18:24:37.720545] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3822851 ] 00:09:19.806 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.806 [2024-07-21 18:24:37.957236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.064 [2024-07-21 18:24:38.045430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.064 [2024-07-21 18:24:38.109626] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.064 [2024-07-21 18:24:38.125876] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:09:20.064 INFO: Running with entropic power schedule (0xFF, 100). 00:09:20.064 INFO: Seed: 3717762847 00:09:20.064 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:20.064 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:20.064 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:20.064 INFO: A corpus is not provided, starting from an empty corpus 00:09:20.064 #2 INITED exec/s: 0 rss: 65Mb 00:09:20.064 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:20.064 This may also happen if the target rejected all inputs we tried so far 00:09:20.064 [2024-07-21 18:24:38.181420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389850981841209 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.064 [2024-07-21 18:24:38.181463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.064 [2024-07-21 18:24:38.181524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4123389851770370361 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.064 [2024-07-21 18:24:38.181546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.630 NEW_FUNC[1/698]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:09:20.630 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:20.630 #3 NEW cov: 12007 ft: 11981 corp: 2/54b lim: 105 exec/s: 0 rss: 72Mb L: 53/53 MS: 1 InsertRepeatedBytes- 00:09:20.630 [2024-07-21 18:24:38.662604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389851767290169 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.662660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.630 [2024-07-21 18:24:38.662731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4123389851770370361 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.662755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.630 #7 NEW cov: 12138 ft: 12453 corp: 3/108b lim: 105 exec/s: 0 rss: 72Mb L: 54/54 
MS: 4 ChangeByte-ChangeBit-CrossOver-CrossOver- 00:09:20.630 [2024-07-21 18:24:38.712893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.712934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.630 [2024-07-21 18:24:38.712993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.713014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.630 [2024-07-21 18:24:38.713078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.713099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.630 [2024-07-21 18:24:38.713166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.713188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:20.630 #8 NEW cov: 12144 ft: 13217 corp: 4/206b lim: 105 exec/s: 0 rss: 72Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:09:20.630 [2024-07-21 18:24:38.763026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.763063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.630 [2024-07-21 18:24:38.763125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.763146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.630 [2024-07-21 18:24:38.763208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.763237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.630 [2024-07-21 18:24:38.763303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.763325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:20.630 #9 NEW cov: 12229 ft: 13631 corp: 5/305b lim: 105 exec/s: 0 rss: 73Mb L: 99/99 MS: 1 CrossOver- 00:09:20.630 [2024-07-21 18:24:38.832944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389851767290169 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.832979] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.630 [2024-07-21 18:24:38.833025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4123389851770370361 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.630 [2024-07-21 18:24:38.833048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.889 #10 NEW cov: 12229 ft: 13798 corp: 6/359b lim: 105 exec/s: 0 rss: 73Mb L: 54/99 MS: 1 CopyPart- 00:09:20.889 [2024-07-21 18:24:38.903442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:38.903477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:38.903545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:38.903567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:38.903629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:38.903649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:38.903714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:38.903735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:20.889 #11 NEW cov: 12229 ft: 13917 corp: 7/459b lim: 105 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 InsertByte- 00:09:20.889 [2024-07-21 18:24:38.973656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389851767290169 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:38.973693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:38.973756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4123389851770370361 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:38.973778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:38.973843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4123338174723864889 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:38.973865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:38.973927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4123389851770370361 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:38.973948] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:20.889 #17 NEW cov: 12229 ft: 13986 corp: 8/555b lim: 105 exec/s: 0 rss: 73Mb L: 96/100 MS: 1 CopyPart- 00:09:20.889 [2024-07-21 18:24:39.043538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389851767290169 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:39.043580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:39.043633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4123389851770370361 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:39.043655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.889 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:20.889 #18 NEW cov: 12252 ft: 14030 corp: 9/603b lim: 105 exec/s: 0 rss: 73Mb L: 48/100 MS: 1 EraseBytes- 00:09:20.889 [2024-07-21 18:24:39.093973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:39.094009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:39.094070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:39.094092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:39.094159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:39.094181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.889 [2024-07-21 18:24:39.094249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.889 [2024-07-21 18:24:39.094270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.148 #19 NEW cov: 12252 ft: 14053 corp: 10/705b lim: 105 exec/s: 0 rss: 73Mb L: 102/102 MS: 1 CopyPart- 00:09:21.148 [2024-07-21 18:24:39.163755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389850981841209 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 18:24:39.163790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.148 #20 NEW cov: 12252 ft: 14511 corp: 11/742b lim: 105 exec/s: 20 rss: 73Mb L: 37/102 MS: 1 EraseBytes- 00:09:21.148 [2024-07-21 18:24:39.233930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389850981841209 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 
18:24:39.233966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.148 #21 NEW cov: 12252 ft: 14529 corp: 12/779b lim: 105 exec/s: 21 rss: 73Mb L: 37/102 MS: 1 ChangeASCIIInt- 00:09:21.148 [2024-07-21 18:24:39.304510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 18:24:39.304546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.148 [2024-07-21 18:24:39.304612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 18:24:39.304635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.148 [2024-07-21 18:24:39.304700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 18:24:39.304728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.148 [2024-07-21 18:24:39.304794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 18:24:39.304817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.148 #22 NEW cov: 12252 ft: 14582 corp: 13/881b lim: 105 exec/s: 22 rss: 73Mb L: 102/102 MS: 1 CMP- DE: "\000\000\000\000"- 00:09:21.148 [2024-07-21 18:24:39.354720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 18:24:39.354757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.148 [2024-07-21 18:24:39.354816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 18:24:39.354838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.148 [2024-07-21 18:24:39.354901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 18:24:39.354923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.148 [2024-07-21 18:24:39.354987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.148 [2024-07-21 18:24:39.355009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.409 #23 NEW cov: 12252 ft: 14611 corp: 14/984b lim: 105 exec/s: 23 rss: 74Mb L: 103/103 MS: 1 CopyPart- 00:09:21.409 [2024-07-21 18:24:39.424602] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389851767290169 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.409 [2024-07-21 18:24:39.424638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.409 [2024-07-21 18:24:39.424684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4123389851770370361 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.409 [2024-07-21 18:24:39.424706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.409 #24 NEW cov: 12252 ft: 14621 corp: 15/1033b lim: 105 exec/s: 24 rss: 74Mb L: 49/103 MS: 1 EraseBytes- 00:09:21.409 [2024-07-21 18:24:39.475005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.409 [2024-07-21 18:24:39.475043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.409 [2024-07-21 18:24:39.475100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.409 [2024-07-21 18:24:39.475122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.409 [2024-07-21 18:24:39.475186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.409 [2024-07-21 18:24:39.475208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.409 [2024-07-21 18:24:39.475281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.409 [2024-07-21 18:24:39.475306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.409 #25 NEW cov: 12252 ft: 14646 corp: 16/1136b lim: 105 exec/s: 25 rss: 74Mb L: 103/103 MS: 1 ChangeByte- 00:09:21.409 [2024-07-21 18:24:39.544959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389851767290169 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.409 [2024-07-21 18:24:39.544996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.409 [2024-07-21 18:24:39.545044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4123389851770367545 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.409 [2024-07-21 18:24:39.545066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.409 #26 NEW cov: 12252 ft: 14665 corp: 17/1190b lim: 105 exec/s: 26 rss: 74Mb L: 54/103 MS: 1 ChangeByte- 00:09:21.410 [2024-07-21 18:24:39.595402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.410 [2024-07-21 18:24:39.595439] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.410 [2024-07-21 18:24:39.595497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.410 [2024-07-21 18:24:39.595519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.410 [2024-07-21 18:24:39.595582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.410 [2024-07-21 18:24:39.595602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.410 [2024-07-21 18:24:39.595667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.410 [2024-07-21 18:24:39.595688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.735 #27 NEW cov: 12252 ft: 14688 corp: 18/1292b lim: 105 exec/s: 27 rss: 74Mb L: 102/103 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:09:21.735 [2024-07-21 18:24:39.665455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.665493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.735 [2024-07-21 18:24:39.665553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.665574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.735 [2024-07-21 18:24:39.665641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.665663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.735 #28 NEW cov: 12252 ft: 14981 corp: 19/1374b lim: 105 exec/s: 28 rss: 74Mb L: 82/103 MS: 1 EraseBytes- 00:09:21.735 [2024-07-21 18:24:39.735750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.735786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.735 [2024-07-21 18:24:39.735848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.735870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.735 [2024-07-21 18:24:39.735934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.735955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.735 [2024-07-21 18:24:39.736017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.736040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.735 #29 NEW cov: 12252 ft: 14996 corp: 20/1477b lim: 105 exec/s: 29 rss: 74Mb L: 103/103 MS: 1 InsertRepeatedBytes- 00:09:21.735 [2024-07-21 18:24:39.785585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389851767290169 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.785620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.735 [2024-07-21 18:24:39.785668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4123389851770370361 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.785690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.735 #30 NEW cov: 12252 ft: 15007 corp: 21/1531b lim: 105 exec/s: 30 rss: 74Mb L: 54/103 MS: 1 ChangeByte- 00:09:21.735 [2024-07-21 18:24:39.836020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.836056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.735 [2024-07-21 18:24:39.836122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.836144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.735 [2024-07-21 18:24:39.836206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.836231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.735 [2024-07-21 18:24:39.836297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.836318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.735 #31 NEW cov: 12252 ft: 15076 corp: 22/1634b lim: 105 exec/s: 31 rss: 74Mb L: 103/103 MS: 1 CopyPart- 00:09:21.735 [2024-07-21 18:24:39.885727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389850981841209 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.735 [2024-07-21 18:24:39.885762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.735 #32 
NEW cov: 12252 ft: 15084 corp: 23/1671b lim: 105 exec/s: 32 rss: 74Mb L: 37/103 MS: 1 ChangeASCIIInt- 00:09:21.994 [2024-07-21 18:24:39.956109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389851767290169 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:39.956149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.994 [2024-07-21 18:24:39.956216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4123389854162303289 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:39.956238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.994 #33 NEW cov: 12252 ft: 15159 corp: 24/1725b lim: 105 exec/s: 33 rss: 74Mb L: 54/103 MS: 1 ChangeBinInt- 00:09:21.994 [2024-07-21 18:24:40.026164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4123389850981841209 len:14650 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:40.026202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.994 #34 NEW cov: 12252 ft: 15201 corp: 25/1762b lim: 105 exec/s: 34 rss: 74Mb L: 37/103 MS: 1 ChangeASCIIInt- 00:09:21.994 [2024-07-21 18:24:40.086792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:40.086844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.994 [2024-07-21 18:24:40.086934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:40.086968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.994 [2024-07-21 18:24:40.087056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:40.087088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.994 [2024-07-21 18:24:40.087175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:40.087207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.994 #35 NEW cov: 12252 ft: 15387 corp: 26/1864b lim: 105 exec/s: 35 rss: 74Mb L: 102/103 MS: 1 ChangeByte- 00:09:21.994 [2024-07-21 18:24:40.146780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:40.146819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.994 [2024-07-21 18:24:40.146871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:40.146893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.994 [2024-07-21 18:24:40.146959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.994 [2024-07-21 18:24:40.146983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.994 #36 NEW cov: 12252 ft: 15395 corp: 27/1940b lim: 105 exec/s: 18 rss: 74Mb L: 76/103 MS: 1 InsertRepeatedBytes- 00:09:21.994 #36 DONE cov: 12252 ft: 15395 corp: 27/1940b lim: 105 exec/s: 18 rss: 74Mb 00:09:21.994 ###### Recommended dictionary. ###### 00:09:21.994 "\000\000\000\000" # Uses: 1 00:09:21.994 ###### End of recommended dictionary. ###### 00:09:21.994 Done 36 runs in 2 second(s) 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:22.253 18:24:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:09:22.253 [2024-07-21 18:24:40.381885] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:22.253 [2024-07-21 18:24:40.381963] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3823209 ] 00:09:22.253 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.512 [2024-07-21 18:24:40.627563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.512 [2024-07-21 18:24:40.715677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.770 [2024-07-21 18:24:40.779827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.770 [2024-07-21 18:24:40.796061] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:09:22.770 INFO: Running with entropic power schedule (0xFF, 100). 00:09:22.770 INFO: Seed: 2092784464 00:09:22.770 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:22.770 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:22.770 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:22.770 INFO: A corpus is not provided, starting from an empty corpus 00:09:22.770 #2 INITED exec/s: 0 rss: 65Mb 00:09:22.770 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:22.770 This may also happen if the target rejected all inputs we tried so far 00:09:22.770 [2024-07-21 18:24:40.872979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.770 [2024-07-21 18:24:40.873029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.338 NEW_FUNC[1/699]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:09:23.338 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:23.338 #8 NEW cov: 12025 ft: 12022 corp: 2/48b lim: 120 exec/s: 0 rss: 73Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:09:23.338 [2024-07-21 18:24:41.374344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.338 [2024-07-21 18:24:41.374398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.338 #14 NEW cov: 12159 ft: 12512 corp: 3/95b lim: 120 exec/s: 0 rss: 73Mb L: 47/47 MS: 1 ChangeByte- 00:09:23.338 [2024-07-21 18:24:41.465829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.338 [2024-07-21 18:24:41.465868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.338 [2024-07-21 18:24:41.465937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 
len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.338 [2024-07-21 18:24:41.465960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.338 [2024-07-21 18:24:41.466023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.338 [2024-07-21 18:24:41.466047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.338 [2024-07-21 18:24:41.466142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.338 [2024-07-21 18:24:41.466165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.338 #15 NEW cov: 12165 ft: 13651 corp: 4/193b lim: 120 exec/s: 0 rss: 73Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:09:23.597 [2024-07-21 18:24:41.556069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417613 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.556113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.597 [2024-07-21 18:24:41.556177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.556203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.597 [2024-07-21 18:24:41.556284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.556308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.597 [2024-07-21 18:24:41.556401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.556427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.597 #16 NEW cov: 12250 ft: 13852 corp: 5/292b lim: 120 exec/s: 0 rss: 73Mb L: 99/99 MS: 1 InsertByte- 00:09:23.597 [2024-07-21 18:24:41.646518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417613 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.646557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.597 [2024-07-21 18:24:41.646633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.646659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.597 [2024-07-21 18:24:41.646742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.646766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.597 [2024-07-21 18:24:41.646866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12659529147151789999 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.646893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.597 #17 NEW cov: 12250 ft: 13967 corp: 6/391b lim: 120 exec/s: 0 rss: 73Mb L: 99/99 MS: 1 ChangeBit- 00:09:23.597 [2024-07-21 18:24:41.737017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417613 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.737055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.597 [2024-07-21 18:24:41.737128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.737155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.597 [2024-07-21 18:24:41.737217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.737243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.597 [2024-07-21 18:24:41.737344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12659529147151789999 len:44807 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.597 [2024-07-21 18:24:41.737368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.597 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:23.597 #18 NEW cov: 12273 ft: 14037 corp: 7/498b lim: 120 exec/s: 0 rss: 73Mb L: 107/107 MS: 1 CMP- DE: "\006\000\000\000\000\000\000\000"- 00:09:23.857 [2024-07-21 18:24:41.826303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.857 [2024-07-21 18:24:41.826343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.857 #19 NEW cov: 12273 ft: 14162 corp: 8/545b lim: 120 exec/s: 19 rss: 73Mb L: 47/107 MS: 1 ShuffleBytes- 00:09:23.857 [2024-07-21 18:24:41.887735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417613 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.857 [2024-07-21 18:24:41.887774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.857 [2024-07-21 18:24:41.887851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:09:23.857 [2024-07-21 18:24:41.887875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.857 [2024-07-21 18:24:41.887946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.857 [2024-07-21 18:24:41.887975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.857 [2024-07-21 18:24:41.888084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:49450189288501291 len:44807 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.857 [2024-07-21 18:24:41.888112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.857 #20 NEW cov: 12273 ft: 14183 corp: 9/652b lim: 120 exec/s: 20 rss: 73Mb L: 107/107 MS: 1 CMP- DE: "\275\0219f\235\360+\000"- 00:09:23.857 [2024-07-21 18:24:41.977052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.857 [2024-07-21 18:24:41.977092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.857 #21 NEW cov: 12273 ft: 14283 corp: 10/699b lim: 120 exec/s: 21 rss: 74Mb L: 47/107 MS: 1 ChangeBit- 00:09:23.857 [2024-07-21 18:24:42.047586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.857 [2024-07-21 18:24:42.047625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.116 #22 NEW cov: 12273 ft: 14383 corp: 11/746b lim: 120 exec/s: 22 rss: 74Mb L: 47/107 MS: 1 ChangeBinInt- 00:09:24.116 [2024-07-21 18:24:42.128007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.128049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.116 #23 NEW cov: 12273 ft: 14401 corp: 12/793b lim: 120 exec/s: 23 rss: 74Mb L: 47/107 MS: 1 ChangeByte- 00:09:24.116 [2024-07-21 18:24:42.189389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.189428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.116 [2024-07-21 18:24:42.189505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.189530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.116 [2024-07-21 18:24:42.189584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.189608] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.116 [2024-07-21 18:24:42.189705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446655769181945855 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.189732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.116 #24 NEW cov: 12273 ft: 14455 corp: 13/904b lim: 120 exec/s: 24 rss: 74Mb L: 111/111 MS: 1 InsertRepeatedBytes- 00:09:24.116 [2024-07-21 18:24:42.249604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417613 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.249641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.116 [2024-07-21 18:24:42.249720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.249748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.116 [2024-07-21 18:24:42.249803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.249826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.116 [2024-07-21 18:24:42.249930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12659529147151789999 len:44807 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.249958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.116 #25 NEW cov: 12273 ft: 14477 corp: 14/1017b lim: 120 exec/s: 25 rss: 74Mb L: 113/113 MS: 1 CrossOver- 00:09:24.116 [2024-07-21 18:24:42.309202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.309244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.116 [2024-07-21 18:24:42.309335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12659529495044140975 len:176 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.116 [2024-07-21 18:24:42.309356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.374 #26 NEW cov: 12273 ft: 14827 corp: 15/1068b lim: 120 exec/s: 26 rss: 74Mb L: 51/113 MS: 1 InsertRepeatedBytes- 00:09:24.374 [2024-07-21 18:24:42.369517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.374 [2024-07-21 18:24:42.369556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.374 [2024-07-21 18:24:42.369658] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.374 [2024-07-21 18:24:42.369683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.374 #27 NEW cov: 12273 ft: 14887 corp: 16/1123b lim: 120 exec/s: 27 rss: 74Mb L: 55/113 MS: 1 EraseBytes- 00:09:24.374 [2024-07-21 18:24:42.439588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.374 [2024-07-21 18:24:42.439625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.375 #28 NEW cov: 12273 ft: 14896 corp: 17/1170b lim: 120 exec/s: 28 rss: 74Mb L: 47/113 MS: 1 ChangeBit- 00:09:24.375 [2024-07-21 18:24:42.500027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.375 [2024-07-21 18:24:42.500064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.375 #29 NEW cov: 12273 ft: 14916 corp: 18/1212b lim: 120 exec/s: 29 rss: 74Mb L: 42/113 MS: 1 EraseBytes- 00:09:24.375 [2024-07-21 18:24:42.560369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.375 [2024-07-21 18:24:42.560408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.633 #30 NEW cov: 12273 ft: 14966 corp: 19/1259b lim: 120 exec/s: 30 rss: 74Mb L: 47/113 MS: 1 ChangeBinInt- 00:09:24.633 [2024-07-21 18:24:42.620763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.633 [2024-07-21 18:24:42.620807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.633 #31 NEW cov: 12273 ft: 15013 corp: 20/1301b lim: 120 exec/s: 31 rss: 74Mb L: 42/113 MS: 1 CopyPart- 00:09:24.633 [2024-07-21 18:24:42.701201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.633 [2024-07-21 18:24:42.701245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.633 #32 NEW cov: 12273 ft: 15024 corp: 21/1343b lim: 120 exec/s: 32 rss: 74Mb L: 42/113 MS: 1 ChangeByte- 00:09:24.633 [2024-07-21 18:24:42.782785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530246663417775 len:43440 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.633 [2024-07-21 18:24:42.782824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.633 [2024-07-21 18:24:42.782897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.633 [2024-07-21 18:24:42.782921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.633 [2024-07-21 18:24:42.782990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.633 [2024-07-21 18:24:42.783017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.633 [2024-07-21 18:24:42.783118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446655769181945855 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.633 [2024-07-21 18:24:42.783143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.633 #33 NEW cov: 12273 ft: 15058 corp: 22/1454b lim: 120 exec/s: 16 rss: 74Mb L: 111/113 MS: 1 ChangeBinInt- 00:09:24.633 #33 DONE cov: 12273 ft: 15058 corp: 22/1454b lim: 120 exec/s: 16 rss: 74Mb 00:09:24.633 ###### Recommended dictionary. ###### 00:09:24.633 "\006\000\000\000\000\000\000\000" # Uses: 0 00:09:24.633 "\275\0219f\235\360+\000" # Uses: 0 00:09:24.633 ###### End of recommended dictionary. ###### 00:09:24.633 Done 33 runs in 2 second(s) 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:24.891 18:24:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:09:24.891 18:24:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:24.891 18:24:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:24.891 18:24:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:24.891 18:24:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 
0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:09:24.891 [2024-07-21 18:24:43.035841] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:24.891 [2024-07-21 18:24:43.035918] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3823562 ] 00:09:24.891 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.148 [2024-07-21 18:24:43.292323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.407 [2024-07-21 18:24:43.383626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.407 [2024-07-21 18:24:43.448104] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.407 [2024-07-21 18:24:43.464347] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:09:25.407 INFO: Running with entropic power schedule (0xFF, 100). 00:09:25.407 INFO: Seed: 463826679 00:09:25.407 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:25.407 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:25.407 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:25.407 INFO: A corpus is not provided, starting from an empty corpus 00:09:25.407 #2 INITED exec/s: 0 rss: 65Mb 00:09:25.407 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:25.407 This may also happen if the target rejected all inputs we tried so far 00:09:25.407 [2024-07-21 18:24:43.535733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:25.407 [2024-07-21 18:24:43.535786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.407 [2024-07-21 18:24:43.535903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:25.407 [2024-07-21 18:24:43.535932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.407 [2024-07-21 18:24:43.536044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:25.407 [2024-07-21 18:24:43.536074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.407 [2024-07-21 18:24:43.536187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:25.407 [2024-07-21 18:24:43.536219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:25.974 NEW_FUNC[1/695]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:09:25.974 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:25.974 #4 NEW cov: 11944 ft: 11966 corp: 2/86b lim: 100 exec/s: 0 rss: 72Mb L: 85/85 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:25.974 [2024-07-21 18:24:44.036187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:25.974 [2024-07-21 18:24:44.036244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.974 [2024-07-21 18:24:44.036338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:25.974 [2024-07-21 18:24:44.036362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.974 [2024-07-21 18:24:44.036465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:25.974 [2024-07-21 18:24:44.036488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.974 NEW_FUNC[1/2]: 0x1d9c290 in spdk_thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1165 00:09:25.974 NEW_FUNC[2/2]: 0x1d9ca70 in thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1065 00:09:25.974 #7 NEW cov: 12102 ft: 12833 corp: 3/159b lim: 100 exec/s: 0 rss: 72Mb L: 73/85 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:09:25.974 [2024-07-21 18:24:44.106168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:25.974 [2024-07-21 18:24:44.106205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.974 [2024-07-21 18:24:44.106279] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:25.974 [2024-07-21 18:24:44.106303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.974 #8 NEW cov: 12108 ft: 13193 corp: 4/208b lim: 100 exec/s: 0 rss: 72Mb L: 49/85 MS: 1 InsertRepeatedBytes- 00:09:25.974 [2024-07-21 18:24:44.167092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:25.974 [2024-07-21 18:24:44.167128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.974 [2024-07-21 18:24:44.167201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:25.974 [2024-07-21 18:24:44.167228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.974 [2024-07-21 18:24:44.167284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:25.974 [2024-07-21 18:24:44.167306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.974 [2024-07-21 18:24:44.167396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:25.974 [2024-07-21 18:24:44.167421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.232 #9 NEW cov: 12193 ft: 13483 corp: 5/293b lim: 100 exec/s: 0 rss: 73Mb L: 85/85 MS: 1 ChangeBinInt- 00:09:26.232 [2024-07-21 18:24:44.247497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.232 [2024-07-21 18:24:44.247532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.233 [2024-07-21 18:24:44.247604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.233 [2024-07-21 18:24:44.247625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.233 [2024-07-21 18:24:44.247684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:26.233 [2024-07-21 18:24:44.247706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.233 [2024-07-21 18:24:44.247802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:26.233 [2024-07-21 18:24:44.247827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.233 #10 NEW cov: 12193 ft: 13692 corp: 6/379b lim: 100 exec/s: 0 rss: 73Mb L: 86/86 MS: 1 CopyPart- 00:09:26.233 [2024-07-21 18:24:44.307564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.233 [2024-07-21 18:24:44.307599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.233 [2024-07-21 18:24:44.307678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.233 [2024-07-21 
18:24:44.307701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.233 #11 NEW cov: 12193 ft: 13802 corp: 7/428b lim: 100 exec/s: 0 rss: 73Mb L: 49/86 MS: 1 ChangeBit- 00:09:26.233 [2024-07-21 18:24:44.387979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.233 [2024-07-21 18:24:44.388016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.233 [2024-07-21 18:24:44.388119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.233 [2024-07-21 18:24:44.388146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.233 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:26.233 #17 NEW cov: 12216 ft: 13877 corp: 8/477b lim: 100 exec/s: 0 rss: 73Mb L: 49/86 MS: 1 ShuffleBytes- 00:09:26.491 [2024-07-21 18:24:44.448150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.491 [2024-07-21 18:24:44.448187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.491 [2024-07-21 18:24:44.448286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.491 [2024-07-21 18:24:44.448306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.491 #23 NEW cov: 12216 ft: 13902 corp: 9/527b lim: 100 exec/s: 0 rss: 73Mb L: 50/86 MS: 1 InsertByte- 00:09:26.491 [2024-07-21 18:24:44.508579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.491 [2024-07-21 18:24:44.508615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.491 [2024-07-21 18:24:44.508711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.491 [2024-07-21 18:24:44.508736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.491 #24 NEW cov: 12216 ft: 13978 corp: 10/576b lim: 100 exec/s: 24 rss: 73Mb L: 49/86 MS: 1 ChangeBinInt- 00:09:26.491 [2024-07-21 18:24:44.569250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.491 [2024-07-21 18:24:44.569286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.491 [2024-07-21 18:24:44.569355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.491 [2024-07-21 18:24:44.569377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.491 [2024-07-21 18:24:44.569436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:26.491 [2024-07-21 18:24:44.569457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:09:26.491 [2024-07-21 18:24:44.569559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:26.491 [2024-07-21 18:24:44.569580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.491 #30 NEW cov: 12216 ft: 14020 corp: 11/661b lim: 100 exec/s: 30 rss: 73Mb L: 85/86 MS: 1 ChangeBit- 00:09:26.491 [2024-07-21 18:24:44.649274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.491 [2024-07-21 18:24:44.649311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.491 [2024-07-21 18:24:44.649370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.491 [2024-07-21 18:24:44.649395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.491 #31 NEW cov: 12216 ft: 14123 corp: 12/710b lim: 100 exec/s: 31 rss: 73Mb L: 49/86 MS: 1 ChangeBinInt- 00:09:26.749 [2024-07-21 18:24:44.729569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.749 [2024-07-21 18:24:44.729606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.749 [2024-07-21 18:24:44.729702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.749 [2024-07-21 18:24:44.729724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.749 #32 NEW cov: 12216 ft: 14125 corp: 13/759b lim: 100 exec/s: 32 rss: 73Mb L: 49/86 MS: 1 ChangeBit- 00:09:26.749 [2024-07-21 18:24:44.810587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.749 [2024-07-21 18:24:44.810623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.749 [2024-07-21 18:24:44.810697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.749 [2024-07-21 18:24:44.810720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.749 [2024-07-21 18:24:44.810764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:26.749 [2024-07-21 18:24:44.810785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.749 [2024-07-21 18:24:44.810882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:26.749 [2024-07-21 18:24:44.810906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.749 #33 NEW cov: 12216 ft: 14210 corp: 14/844b lim: 100 exec/s: 33 rss: 73Mb L: 85/86 MS: 1 ChangeBit- 00:09:26.749 [2024-07-21 18:24:44.871424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.749 [2024-07-21 18:24:44.871460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.749 [2024-07-21 18:24:44.871538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.749 [2024-07-21 18:24:44.871559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.749 [2024-07-21 18:24:44.871620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:26.749 [2024-07-21 18:24:44.871641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.749 [2024-07-21 18:24:44.871736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:26.749 [2024-07-21 18:24:44.871760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.750 [2024-07-21 18:24:44.871858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:26.750 [2024-07-21 18:24:44.871880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:26.750 #34 NEW cov: 12216 ft: 14256 corp: 15/944b lim: 100 exec/s: 34 rss: 73Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:09:26.750 [2024-07-21 18:24:44.930719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.750 [2024-07-21 18:24:44.930755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.750 [2024-07-21 18:24:44.930857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.750 [2024-07-21 18:24:44.930880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.750 #35 NEW cov: 12216 ft: 14265 corp: 16/993b lim: 100 exec/s: 35 rss: 73Mb L: 49/100 MS: 1 ChangeBit- 00:09:27.008 [2024-07-21 18:24:44.991195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.008 [2024-07-21 18:24:44.991238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.008 [2024-07-21 18:24:44.991326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.008 [2024-07-21 18:24:44.991346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.008 #36 NEW cov: 12216 ft: 14321 corp: 17/1042b lim: 100 exec/s: 36 rss: 73Mb L: 49/100 MS: 1 ChangeBit- 00:09:27.008 [2024-07-21 18:24:45.071628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.008 [2024-07-21 18:24:45.071663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.008 [2024-07-21 18:24:45.071766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.008 [2024-07-21 18:24:45.071788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.008 #37 NEW cov: 
12216 ft: 14354 corp: 18/1092b lim: 100 exec/s: 37 rss: 73Mb L: 50/100 MS: 1 ChangeByte- 00:09:27.008 [2024-07-21 18:24:45.151856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.008 [2024-07-21 18:24:45.151892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.008 [2024-07-21 18:24:45.151977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.008 [2024-07-21 18:24:45.152001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.008 #39 NEW cov: 12216 ft: 14388 corp: 19/1149b lim: 100 exec/s: 39 rss: 73Mb L: 57/100 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:27.008 [2024-07-21 18:24:45.211997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.008 [2024-07-21 18:24:45.212034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.008 [2024-07-21 18:24:45.212122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.008 [2024-07-21 18:24:45.212142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.266 #40 NEW cov: 12216 ft: 14419 corp: 20/1198b lim: 100 exec/s: 40 rss: 73Mb L: 49/100 MS: 1 ChangeByte- 00:09:27.266 [2024-07-21 18:24:45.292667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.266 [2024-07-21 18:24:45.292703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.266 [2024-07-21 18:24:45.292802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.266 [2024-07-21 18:24:45.292823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.266 #41 NEW cov: 12216 ft: 14433 corp: 21/1249b lim: 100 exec/s: 41 rss: 73Mb L: 51/100 MS: 1 InsertByte- 00:09:27.266 [2024-07-21 18:24:45.353104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.266 [2024-07-21 18:24:45.353142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.266 [2024-07-21 18:24:45.353209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.266 [2024-07-21 18:24:45.353237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.266 [2024-07-21 18:24:45.353309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.266 [2024-07-21 18:24:45.353331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.266 #42 NEW cov: 12216 ft: 14451 corp: 22/1320b lim: 100 exec/s: 42 rss: 73Mb L: 71/100 MS: 1 InsertRepeatedBytes- 00:09:27.266 [2024-07-21 18:24:45.413268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.266 
[2024-07-21 18:24:45.413304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.266 [2024-07-21 18:24:45.413366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.266 [2024-07-21 18:24:45.413389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.266 [2024-07-21 18:24:45.413450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.266 [2024-07-21 18:24:45.413470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.266 #43 NEW cov: 12216 ft: 14512 corp: 23/1380b lim: 100 exec/s: 43 rss: 74Mb L: 60/100 MS: 1 InsertRepeatedBytes- 00:09:27.526 [2024-07-21 18:24:45.493741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.526 [2024-07-21 18:24:45.493778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.526 [2024-07-21 18:24:45.493847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.526 [2024-07-21 18:24:45.493868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.526 [2024-07-21 18:24:45.493913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.526 [2024-07-21 18:24:45.493939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.526 [2024-07-21 18:24:45.494042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.526 [2024-07-21 18:24:45.494063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.526 #44 NEW cov: 12216 ft: 14516 corp: 24/1477b lim: 100 exec/s: 22 rss: 74Mb L: 97/100 MS: 1 CrossOver- 00:09:27.526 #44 DONE cov: 12216 ft: 14516 corp: 24/1477b lim: 100 exec/s: 22 rss: 74Mb 00:09:27.526 Done 44 runs in 2 second(s) 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:27.526 18:24:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:09:27.785 [2024-07-21 18:24:45.742346] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:27.785 [2024-07-21 18:24:45.742423] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3823924 ] 00:09:27.785 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.044 [2024-07-21 18:24:46.001973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.044 [2024-07-21 18:24:46.094031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.044 [2024-07-21 18:24:46.158221] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.044 [2024-07-21 18:24:46.174453] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:09:28.044 INFO: Running with entropic power schedule (0xFF, 100). 00:09:28.044 INFO: Seed: 3175828077 00:09:28.044 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:28.044 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:28.044 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:28.044 INFO: A corpus is not provided, starting from an empty corpus 00:09:28.044 #2 INITED exec/s: 0 rss: 65Mb 00:09:28.044 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:28.044 This may also happen if the target rejected all inputs we tried so far 00:09:28.044 [2024-07-21 18:24:46.239872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:990773248 len:1 00:09:28.044 [2024-07-21 18:24:46.239917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.561 NEW_FUNC[1/697]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:09:28.561 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:28.561 #12 NEW cov: 11950 ft: 11951 corp: 2/14b lim: 50 exec/s: 0 rss: 72Mb L: 13/13 MS: 5 ShuffleBytes-ChangeBit-CopyPart-ChangeByte-InsertRepeatedBytes- 00:09:28.561 [2024-07-21 18:24:46.721304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:973996032 len:1 00:09:28.561 [2024-07-21 18:24:46.721386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.820 #13 NEW cov: 12080 ft: 12680 corp: 3/27b lim: 50 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeByte- 00:09:28.820 [2024-07-21 18:24:46.801494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1004666880 len:1 00:09:28.820 [2024-07-21 18:24:46.801533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.820 [2024-07-21 18:24:46.801580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:28.820 [2024-07-21 18:24:46.801603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.820 [2024-07-21 18:24:46.801671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:28.820 [2024-07-21 18:24:46.801693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.820 #16 NEW cov: 12086 ft: 13159 corp: 4/63b lim: 50 exec/s: 0 rss: 72Mb L: 36/36 MS: 3 EraseBytes-ChangeByte-InsertRepeatedBytes- 00:09:28.820 [2024-07-21 18:24:46.851583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1004666880 len:1 00:09:28.820 [2024-07-21 18:24:46.851624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.820 [2024-07-21 18:24:46.851668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:28.820 [2024-07-21 18:24:46.851691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.820 [2024-07-21 18:24:46.851756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:28.820 [2024-07-21 18:24:46.851777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.820 #17 NEW cov: 12171 ft: 13394 corp: 5/99b lim: 50 
exec/s: 0 rss: 73Mb L: 36/36 MS: 1 CopyPart- 00:09:28.820 [2024-07-21 18:24:46.921542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:973078528 len:1 00:09:28.820 [2024-07-21 18:24:46.921581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.820 #18 NEW cov: 12171 ft: 13498 corp: 6/112b lim: 50 exec/s: 0 rss: 73Mb L: 13/36 MS: 1 CopyPart- 00:09:28.820 [2024-07-21 18:24:46.991888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14829735431805717965 len:52686 00:09:28.820 [2024-07-21 18:24:46.991925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.820 [2024-07-21 18:24:46.991971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14829735431805717965 len:52686 00:09:28.820 [2024-07-21 18:24:46.991994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.820 #19 NEW cov: 12171 ft: 13854 corp: 7/141b lim: 50 exec/s: 0 rss: 73Mb L: 29/36 MS: 1 InsertRepeatedBytes- 00:09:29.078 [2024-07-21 18:24:47.041876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3940649673964032 len:1 00:09:29.078 [2024-07-21 18:24:47.041913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.078 #20 NEW cov: 12171 ft: 13966 corp: 8/154b lim: 50 exec/s: 0 rss: 73Mb L: 13/36 MS: 1 ShuffleBytes- 00:09:29.078 [2024-07-21 18:24:47.092048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3941538732194304 len:1 00:09:29.078 [2024-07-21 18:24:47.092086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.078 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:29.078 #21 NEW cov: 12194 ft: 14034 corp: 9/167b lim: 50 exec/s: 0 rss: 73Mb L: 13/36 MS: 1 ChangeByte- 00:09:29.078 [2024-07-21 18:24:47.162507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1004666880 len:1 00:09:29.078 [2024-07-21 18:24:47.162543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.078 [2024-07-21 18:24:47.162594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:29.078 [2024-07-21 18:24:47.162615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.078 [2024-07-21 18:24:47.162681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:31 00:09:29.078 [2024-07-21 18:24:47.162702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.078 #22 NEW cov: 12194 ft: 14078 corp: 10/203b lim: 50 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 ChangeByte- 00:09:29.078 [2024-07-21 18:24:47.232442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3940649673964036 len:1 00:09:29.078 [2024-07-21 18:24:47.232479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.078 #23 NEW cov: 12194 ft: 14114 corp: 11/216b lim: 50 exec/s: 23 rss: 73Mb L: 13/36 MS: 1 ChangeBit- 00:09:29.078 [2024-07-21 18:24:47.282629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18442803419756807675 len:65281 00:09:29.078 [2024-07-21 18:24:47.282666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.337 #24 NEW cov: 12194 ft: 14138 corp: 12/229b lim: 50 exec/s: 24 rss: 73Mb L: 13/36 MS: 1 ChangeBinInt- 00:09:29.337 [2024-07-21 18:24:47.352803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:151297851392 len:1 00:09:29.337 [2024-07-21 18:24:47.352841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.337 #25 NEW cov: 12194 ft: 14167 corp: 13/242b lim: 50 exec/s: 25 rss: 73Mb L: 13/36 MS: 1 ChangeByte- 00:09:29.337 [2024-07-21 18:24:47.403092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1004666880 len:1 00:09:29.337 [2024-07-21 18:24:47.403131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.337 [2024-07-21 18:24:47.403192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:29.337 [2024-07-21 18:24:47.403220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.337 #26 NEW cov: 12194 ft: 14211 corp: 14/271b lim: 50 exec/s: 26 rss: 73Mb L: 29/36 MS: 1 EraseBytes- 00:09:29.337 [2024-07-21 18:24:47.473130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:990773390 len:1 00:09:29.337 [2024-07-21 18:24:47.473170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.337 #27 NEW cov: 12194 ft: 14251 corp: 15/285b lim: 50 exec/s: 27 rss: 73Mb L: 14/36 MS: 1 InsertByte- 00:09:29.337 [2024-07-21 18:24:47.523276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1004666880 len:1 00:09:29.337 [2024-07-21 18:24:47.523315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.596 #28 NEW cov: 12194 ft: 14305 corp: 16/300b lim: 50 exec/s: 28 rss: 73Mb L: 15/36 MS: 1 EraseBytes- 00:09:29.596 [2024-07-21 18:24:47.593606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1004666880 len:1 00:09:29.596 [2024-07-21 18:24:47.593644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.596 [2024-07-21 18:24:47.593689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:29.596 [2024-07-21 18:24:47.593712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.596 #29 NEW cov: 12194 ft: 14315 corp: 17/329b lim: 50 exec/s: 29 rss: 73Mb L: 29/36 MS: 1 CopyPart- 00:09:29.596 [2024-07-21 18:24:47.643612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18442802324540147195 len:256 00:09:29.596 [2024-07-21 18:24:47.643651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.596 #30 NEW cov: 12194 ft: 14334 corp: 18/342b lim: 50 exec/s: 30 rss: 74Mb L: 13/36 MS: 1 ShuffleBytes- 00:09:29.596 [2024-07-21 18:24:47.713867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15393162803803 len:1 00:09:29.596 [2024-07-21 18:24:47.713905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.596 #31 NEW cov: 12194 ft: 14403 corp: 19/356b lim: 50 exec/s: 31 rss: 74Mb L: 14/36 MS: 1 InsertByte- 00:09:29.596 [2024-07-21 18:24:47.763968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3941538732194304 len:1 00:09:29.596 [2024-07-21 18:24:47.764007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.855 #32 NEW cov: 12194 ft: 14426 corp: 20/367b lim: 50 exec/s: 32 rss: 74Mb L: 11/36 MS: 1 EraseBytes- 00:09:29.855 [2024-07-21 18:24:47.834204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:973078528 len:1 00:09:29.855 [2024-07-21 18:24:47.834249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.855 #33 NEW cov: 12194 ft: 14452 corp: 21/380b lim: 50 exec/s: 33 rss: 74Mb L: 13/36 MS: 1 ChangeByte- 00:09:29.855 [2024-07-21 18:24:47.904353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:973078528 len:1 00:09:29.855 [2024-07-21 18:24:47.904391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.855 #34 NEW cov: 12194 ft: 14515 corp: 22/393b lim: 50 exec/s: 34 rss: 74Mb L: 13/36 MS: 1 CrossOver- 00:09:29.855 [2024-07-21 18:24:47.974595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3940649673964032 len:2049 00:09:29.856 [2024-07-21 18:24:47.974632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.856 #35 NEW cov: 12194 ft: 14537 corp: 23/406b lim: 50 exec/s: 35 rss: 74Mb L: 13/36 MS: 1 ChangeBinInt- 00:09:29.856 [2024-07-21 18:24:48.024923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14829735429943446989 len:52686 00:09:29.856 [2024-07-21 18:24:48.024958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.856 [2024-07-21 18:24:48.025009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14829735431805717965 len:52686 00:09:29.856 [2024-07-21 18:24:48.025031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.856 [2024-07-21 18:24:48.025096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14829735431805717965 len:52491 00:09:29.856 [2024-07-21 18:24:48.025118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.115 #36 NEW cov: 12194 ft: 14547 corp: 24/436b lim: 50 exec/s: 36 rss: 74Mb L: 30/36 MS: 1 InsertByte- 00:09:30.115 [2024-07-21 18:24:48.094959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18442802324540147195 len:256 00:09:30.115 [2024-07-21 18:24:48.094997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.115 #37 NEW cov: 12194 ft: 14570 corp: 25/449b lim: 50 exec/s: 37 rss: 74Mb L: 13/36 MS: 1 ChangeByte- 00:09:30.115 [2024-07-21 18:24:48.165077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:990773248 len:65022 00:09:30.115 [2024-07-21 18:24:48.165114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.115 #38 NEW cov: 12194 ft: 14591 corp: 26/465b lim: 50 exec/s: 38 rss: 74Mb L: 16/36 MS: 1 InsertRepeatedBytes- 00:09:30.115 [2024-07-21 18:24:48.215217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:292171914883906048 len:1 00:09:30.115 [2024-07-21 18:24:48.215252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.115 #39 NEW cov: 12194 ft: 14596 corp: 27/476b lim: 50 exec/s: 19 rss: 74Mb L: 11/36 MS: 1 ChangeBit- 00:09:30.115 #39 DONE cov: 12194 ft: 14596 corp: 27/476b lim: 50 exec/s: 19 rss: 74Mb 00:09:30.115 Done 39 runs in 2 second(s) 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:30.374 
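The run.sh trace above shows how consecutive fuzzer runs are kept from colliding: the NVMe/TCP service id is derived as 44 plus the fuzzer_type zero-padded to two digits (printf %02d 20 gives port 4420 here, after 4419 for the previous run and before 4421 for the next), and the template JSON config is rewritten to that port. A sketch of the pattern with illustrative variable names; only the printf, port and sed steps appear in the trace, and the redirect into the per-run config is implied by the -c /tmp/fuzz_json_20.conf argument passed to the fuzzer below:

  fuzzer_type=20
  port="44$(printf %02d "$fuzzer_type")"    # -> 4420
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      test/fuzz/llvm/nvmf/fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"

For this particular run the substitution is a no-op, since the template already carries trsvcid 4420; for runs 19 and 21 it is what retargets the listener.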
18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:30.374 18:24:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:09:30.374 [2024-07-21 18:24:48.470835] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:30.374 [2024-07-21 18:24:48.470931] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824277 ] 00:09:30.374 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.631 [2024-07-21 18:24:48.726655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.631 [2024-07-21 18:24:48.815352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.890 [2024-07-21 18:24:48.879648] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.890 [2024-07-21 18:24:48.895883] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:30.890 INFO: Running with entropic power schedule (0xFF, 100). 00:09:30.890 INFO: Seed: 1600867837 00:09:30.890 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:30.890 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:30.890 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:30.890 INFO: A corpus is not provided, starting from an empty corpus 00:09:30.890 #2 INITED exec/s: 0 rss: 65Mb 00:09:30.890 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
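Before launching the target, the trace also arms LeakSanitizer: the echo leak:... steps feed symbol patterns for two known, tolerated leaks into the suppression file named by LSAN_OPTIONS, and print_suppressions=0 keeps the matched-suppression report out of the log while report_objects=1 still lists any unsuppressed leaked objects. The equivalent standalone sequence, a sketch that assumes the >> redirects the xtrace output does not show:

  suppress_file=/var/tmp/suppress_nvmf_fuzz
  echo "leak:spdk_nvmf_qpair_disconnect" >> "$suppress_file"
  echo "leak:nvmf_ctrlr_create"          >> "$suppress_file"
  export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"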
00:09:30.890 This may also happen if the target rejected all inputs we tried so far 00:09:30.890 [2024-07-21 18:24:48.966495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:30.890 [2024-07-21 18:24:48.966549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.458 NEW_FUNC[1/699]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:09:31.458 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:31.458 #4 NEW cov: 12008 ft: 12009 corp: 2/36b lim: 90 exec/s: 0 rss: 72Mb L: 35/35 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:31.458 [2024-07-21 18:24:49.447817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.458 [2024-07-21 18:24:49.447862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.458 [2024-07-21 18:24:49.447957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.458 [2024-07-21 18:24:49.447978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.458 #10 NEW cov: 12138 ft: 13234 corp: 3/72b lim: 90 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 CrossOver- 00:09:31.458 [2024-07-21 18:24:49.517636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.458 [2024-07-21 18:24:49.517668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.458 #11 NEW cov: 12144 ft: 13591 corp: 4/107b lim: 90 exec/s: 0 rss: 73Mb L: 35/36 MS: 1 ChangeBit- 00:09:31.458 [2024-07-21 18:24:49.568914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.458 [2024-07-21 18:24:49.568948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.458 [2024-07-21 18:24:49.569037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.458 [2024-07-21 18:24:49.569057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.458 [2024-07-21 18:24:49.569142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:31.458 [2024-07-21 18:24:49.569159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.458 [2024-07-21 18:24:49.569254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:31.458 [2024-07-21 18:24:49.569275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:31.458 #12 NEW cov: 12229 ft: 14251 corp: 5/187b lim: 90 exec/s: 0 rss: 73Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:09:31.458 [2024-07-21 18:24:49.638421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.458 [2024-07-21 18:24:49.638450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.458 [2024-07-21 18:24:49.638521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.458 [2024-07-21 18:24:49.638542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.458 #18 NEW cov: 12229 ft: 14324 corp: 6/231b lim: 90 exec/s: 0 rss: 73Mb L: 44/80 MS: 1 CopyPart- 00:09:31.717 [2024-07-21 18:24:49.688622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.717 [2024-07-21 18:24:49.688652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.717 [2024-07-21 18:24:49.688716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.717 [2024-07-21 18:24:49.688736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.717 #19 NEW cov: 12229 ft: 14442 corp: 7/275b lim: 90 exec/s: 0 rss: 73Mb L: 44/80 MS: 1 ShuffleBytes- 00:09:31.717 [2024-07-21 18:24:49.749612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.717 [2024-07-21 18:24:49.749641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.717 [2024-07-21 18:24:49.749747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.717 [2024-07-21 18:24:49.749768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.717 [2024-07-21 18:24:49.749865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:31.717 [2024-07-21 18:24:49.749880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.717 [2024-07-21 18:24:49.749980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:31.717 [2024-07-21 18:24:49.750000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:31.717 #20 NEW cov: 12229 ft: 14495 corp: 8/363b lim: 90 exec/s: 0 rss: 73Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:09:31.717 [2024-07-21 18:24:49.819119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.717 [2024-07-21 18:24:49.819148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.717 [2024-07-21 18:24:49.819233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.717 [2024-07-21 18:24:49.819252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.717 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:31.717 #24 NEW cov: 12252 ft: 14567 corp: 9/413b lim: 90 exec/s: 0 rss: 73Mb L: 50/88 MS: 4 ChangeBit-CopyPart-CMP-InsertRepeatedBytes- DE: "\001\000\000\000,\262E\373"- 00:09:31.717 [2024-07-21 18:24:49.879354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.717 [2024-07-21 18:24:49.879386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.717 [2024-07-21 18:24:49.879461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.717 [2024-07-21 18:24:49.879483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.717 #25 NEW cov: 12252 ft: 14601 corp: 10/449b lim: 90 exec/s: 0 rss: 73Mb L: 36/88 MS: 1 InsertByte- 00:09:31.717 [2024-07-21 18:24:49.929617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.717 [2024-07-21 18:24:49.929650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.717 [2024-07-21 18:24:49.929720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.717 [2024-07-21 18:24:49.929740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.977 #31 NEW cov: 12252 ft: 14630 corp: 11/485b lim: 90 exec/s: 31 rss: 73Mb L: 36/88 MS: 1 ChangeByte- 00:09:31.977 [2024-07-21 18:24:49.989953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.977 [2024-07-21 18:24:49.989983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.977 [2024-07-21 18:24:49.990063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.977 [2024-07-21 18:24:49.990083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.977 #32 NEW cov: 12252 ft: 14648 corp: 12/521b lim: 90 exec/s: 32 rss: 73Mb L: 36/88 MS: 1 ChangeByte- 00:09:31.977 [2024-07-21 18:24:50.061382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.977 [2024-07-21 18:24:50.061420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.977 [2024-07-21 18:24:50.061493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.977 [2024-07-21 18:24:50.061512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.977 [2024-07-21 18:24:50.061600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:31.977 [2024-07-21 18:24:50.061621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.977 [2024-07-21 18:24:50.061714] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:31.977 [2024-07-21 18:24:50.061733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:31.977 #33 NEW cov: 12252 ft: 14688 corp: 13/608b lim: 90 exec/s: 33 rss: 73Mb L: 87/88 MS: 1 InsertRepeatedBytes- 00:09:31.977 [2024-07-21 18:24:50.130957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:31.977 [2024-07-21 18:24:50.130992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.977 [2024-07-21 18:24:50.131074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:31.977 [2024-07-21 18:24:50.131092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.977 [2024-07-21 18:24:50.131190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:31.977 [2024-07-21 18:24:50.131208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.977 #34 NEW cov: 12252 ft: 15001 corp: 14/665b lim: 90 exec/s: 34 rss: 74Mb L: 57/88 MS: 1 CrossOver- 00:09:32.237 [2024-07-21 18:24:50.200756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.237 [2024-07-21 18:24:50.200789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.237 [2024-07-21 18:24:50.200846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.237 [2024-07-21 18:24:50.200865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.237 #35 NEW cov: 12252 ft: 15009 corp: 15/701b lim: 90 exec/s: 35 rss: 74Mb L: 36/88 MS: 1 PersAutoDict- DE: "\001\000\000\000,\262E\373"- 00:09:32.237 [2024-07-21 18:24:50.271065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.237 [2024-07-21 18:24:50.271095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.237 [2024-07-21 18:24:50.271171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.237 [2024-07-21 18:24:50.271191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.237 #36 NEW cov: 12252 ft: 15030 corp: 16/751b lim: 90 exec/s: 36 rss: 74Mb L: 50/88 MS: 1 CrossOver- 00:09:32.237 [2024-07-21 18:24:50.341518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.237 [2024-07-21 18:24:50.341552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.237 [2024-07-21 18:24:50.341625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.237 [2024-07-21 18:24:50.341645] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.237 #37 NEW cov: 12252 ft: 15057 corp: 17/801b lim: 90 exec/s: 37 rss: 74Mb L: 50/88 MS: 1 ChangeBinInt- 00:09:32.237 [2024-07-21 18:24:50.391781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.237 [2024-07-21 18:24:50.391811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.237 [2024-07-21 18:24:50.391889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.237 [2024-07-21 18:24:50.391910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.237 #38 NEW cov: 12252 ft: 15097 corp: 18/838b lim: 90 exec/s: 38 rss: 74Mb L: 37/88 MS: 1 InsertByte- 00:09:32.237 [2024-07-21 18:24:50.442054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.237 [2024-07-21 18:24:50.442084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.237 [2024-07-21 18:24:50.442166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.237 [2024-07-21 18:24:50.442186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.497 #39 NEW cov: 12252 ft: 15130 corp: 19/888b lim: 90 exec/s: 39 rss: 74Mb L: 50/88 MS: 1 ShuffleBytes- 00:09:32.497 [2024-07-21 18:24:50.492950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.497 [2024-07-21 18:24:50.492979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.497 [2024-07-21 18:24:50.493085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.497 [2024-07-21 18:24:50.493105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.497 [2024-07-21 18:24:50.493194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:32.497 [2024-07-21 18:24:50.493215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.497 [2024-07-21 18:24:50.493302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:32.497 [2024-07-21 18:24:50.493323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:32.497 #40 NEW cov: 12252 ft: 15139 corp: 20/976b lim: 90 exec/s: 40 rss: 74Mb L: 88/88 MS: 1 CopyPart- 00:09:32.497 [2024-07-21 18:24:50.563240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.497 [2024-07-21 18:24:50.563271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.497 [2024-07-21 18:24:50.563371] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.497 [2024-07-21 18:24:50.563392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.497 [2024-07-21 18:24:50.563487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:32.497 [2024-07-21 18:24:50.563510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.497 [2024-07-21 18:24:50.563607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:32.497 [2024-07-21 18:24:50.563627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:32.497 #41 NEW cov: 12252 ft: 15147 corp: 21/1064b lim: 90 exec/s: 41 rss: 74Mb L: 88/88 MS: 1 CopyPart- 00:09:32.497 [2024-07-21 18:24:50.632793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.497 [2024-07-21 18:24:50.632825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.497 [2024-07-21 18:24:50.632901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.497 [2024-07-21 18:24:50.632918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.497 #42 NEW cov: 12252 ft: 15172 corp: 22/1109b lim: 90 exec/s: 42 rss: 74Mb L: 45/88 MS: 1 EraseBytes- 00:09:32.497 [2024-07-21 18:24:50.692983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.497 [2024-07-21 18:24:50.693012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.497 [2024-07-21 18:24:50.693094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.497 [2024-07-21 18:24:50.693117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.756 #43 NEW cov: 12252 ft: 15178 corp: 23/1146b lim: 90 exec/s: 43 rss: 74Mb L: 37/88 MS: 1 CrossOver- 00:09:32.756 [2024-07-21 18:24:50.753245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.756 [2024-07-21 18:24:50.753275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.756 [2024-07-21 18:24:50.753358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.756 [2024-07-21 18:24:50.753378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.756 #44 NEW cov: 12252 ft: 15184 corp: 24/1196b lim: 90 exec/s: 44 rss: 74Mb L: 50/88 MS: 1 PersAutoDict- DE: "\001\000\000\000,\262E\373"- 00:09:32.756 [2024-07-21 18:24:50.804295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.756 [2024-07-21 18:24:50.804329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.756 [2024-07-21 18:24:50.804427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.756 [2024-07-21 18:24:50.804447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.756 [2024-07-21 18:24:50.804543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:32.756 [2024-07-21 18:24:50.804562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.756 [2024-07-21 18:24:50.804652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:32.756 [2024-07-21 18:24:50.804670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:32.756 #45 NEW cov: 12252 ft: 15198 corp: 25/1282b lim: 90 exec/s: 45 rss: 74Mb L: 86/88 MS: 1 CopyPart- 00:09:32.756 [2024-07-21 18:24:50.853795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.756 [2024-07-21 18:24:50.853824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.756 [2024-07-21 18:24:50.853895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.756 [2024-07-21 18:24:50.853916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.756 #46 NEW cov: 12252 ft: 15241 corp: 26/1318b lim: 90 exec/s: 46 rss: 74Mb L: 36/88 MS: 1 ChangeBinInt- 00:09:32.756 [2024-07-21 18:24:50.903993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.756 [2024-07-21 18:24:50.904023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.756 [2024-07-21 18:24:50.904096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.756 [2024-07-21 18:24:50.904113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.756 #52 NEW cov: 12252 ft: 15248 corp: 27/1368b lim: 90 exec/s: 52 rss: 74Mb L: 50/88 MS: 1 ShuffleBytes- 00:09:32.756 [2024-07-21 18:24:50.954249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.756 [2024-07-21 18:24:50.954279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.756 [2024-07-21 18:24:50.954362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.756 [2024-07-21 18:24:50.954381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.015 #53 NEW cov: 12252 ft: 15272 corp: 28/1405b lim: 90 exec/s: 26 rss: 74Mb L: 37/88 MS: 1 ChangeBinInt- 00:09:33.015 #53 DONE cov: 12252 ft: 15272 corp: 28/1405b lim: 90 exec/s: 26 rss: 74Mb 00:09:33.015 ###### Recommended dictionary. 
###### 00:09:33.015 "\001\000\000\000,\262E\373" # Uses: 2 00:09:33.015 ###### End of recommended dictionary. ###### 00:09:33.015 Done 53 runs in 2 second(s) 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:33.015 18:24:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:09:33.015 [2024-07-21 18:24:51.186905] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
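Run 20 ends by recommending a dictionary entry, the eight-byte value "\001\000\000\000,\262E\373" that its CMP hook captured and reused twice (the PersAutoDict mutations above). Such entries are already in the [name=]"value" syntax that libFuzzer dictionary files use, so they can be harvested for later sessions; a sketch, where the output file name is illustrative and feeding it back in would rely on the harness accepting libFuzzer's -dict= option, which this trace does not show:

  printf '%s\n' \
      '# value recommended by run 20 ("Uses: 2")' \
      '"\001\000\000\000,\262E\373"' > nvmf_reservation.dict

printf %s writes the backslash escapes out literally, which is the form the dictionary parser expects.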
00:09:33.015 [2024-07-21 18:24:51.186980] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824640 ] 00:09:33.274 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.274 [2024-07-21 18:24:51.441409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.532 [2024-07-21 18:24:51.530241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.532 [2024-07-21 18:24:51.594457] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.532 [2024-07-21 18:24:51.610697] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:09:33.532 INFO: Running with entropic power schedule (0xFF, 100). 00:09:33.532 INFO: Seed: 22907335 00:09:33.532 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:33.532 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:33.532 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:33.532 INFO: A corpus is not provided, starting from an empty corpus 00:09:33.532 #2 INITED exec/s: 0 rss: 65Mb 00:09:33.532 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:33.532 This may also happen if the target rejected all inputs we tried so far 00:09:33.532 [2024-07-21 18:24:51.688107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:33.532 [2024-07-21 18:24:51.688158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.532 [2024-07-21 18:24:51.688266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:33.532 [2024-07-21 18:24:51.688294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.096 NEW_FUNC[1/699]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:09:34.096 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:34.096 #4 NEW cov: 11983 ft: 11983 corp: 2/25b lim: 50 exec/s: 0 rss: 72Mb L: 24/24 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:09:34.096 [2024-07-21 18:24:52.189495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.096 [2024-07-21 18:24:52.189554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.096 [2024-07-21 18:24:52.189641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.096 [2024-07-21 18:24:52.189665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.096 #5 NEW cov: 12113 ft: 12613 corp: 3/49b lim: 50 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ChangeBit- 00:09:34.096 [2024-07-21 18:24:52.279633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.096 [2024-07-21 
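Each -Z value selects a different NVMe opcode handler inside llvm_nvme_fuzz: the NEW_FUNC banners show run 19 exercising fuzz_nvm_write_uncorrectable_command, run 20 fuzz_nvm_reservation_acquire_command, and this run fuzz_nvm_reservation_release_command, all dispatched through the shared TestOneInput at llvm_nvme_fuzz.c:780. A one-liner to recover that mapping from a saved copy of this console output (console.log again being an illustrative capture):

  grep -oE 'fuzz_nvm_[a-z_]+_command' console.log | sort -u
  # -> fuzz_nvm_reservation_acquire_command   (run 20, port 4420)
  #    fuzz_nvm_reservation_release_command   (run 21, port 4421)
  #    fuzz_nvm_write_uncorrectable_command   (run 19, port 4419)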
18:24:52.279673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.096 [2024-07-21 18:24:52.279778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.096 [2024-07-21 18:24:52.279803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.353 #6 NEW cov: 12119 ft: 12777 corp: 4/73b lim: 50 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ShuffleBytes- 00:09:34.353 [2024-07-21 18:24:52.340179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.353 [2024-07-21 18:24:52.340226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.353 [2024-07-21 18:24:52.340297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.353 [2024-07-21 18:24:52.340323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.353 [2024-07-21 18:24:52.340408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:34.353 [2024-07-21 18:24:52.340434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.353 #7 NEW cov: 12204 ft: 13476 corp: 5/104b lim: 50 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 CopyPart- 00:09:34.353 [2024-07-21 18:24:52.429667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.353 [2024-07-21 18:24:52.429705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.353 #13 NEW cov: 12204 ft: 14309 corp: 6/120b lim: 50 exec/s: 0 rss: 72Mb L: 16/31 MS: 1 EraseBytes- 00:09:34.353 [2024-07-21 18:24:52.521091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.353 [2024-07-21 18:24:52.521130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.353 [2024-07-21 18:24:52.521205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.353 [2024-07-21 18:24:52.521236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.353 [2024-07-21 18:24:52.521341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:34.353 [2024-07-21 18:24:52.521370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.353 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:34.353 #14 NEW cov: 12227 ft: 14495 corp: 7/151b lim: 50 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:09:34.611 [2024-07-21 18:24:52.590946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.611 [2024-07-21 18:24:52.590992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.611 [2024-07-21 18:24:52.591080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.611 [2024-07-21 18:24:52.591106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.611 #15 NEW cov: 12227 ft: 14537 corp: 8/175b lim: 50 exec/s: 0 rss: 73Mb L: 24/31 MS: 1 ShuffleBytes- 00:09:34.611 [2024-07-21 18:24:52.671101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.611 [2024-07-21 18:24:52.671140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.611 [2024-07-21 18:24:52.671219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.611 [2024-07-21 18:24:52.671243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.611 #16 NEW cov: 12227 ft: 14611 corp: 9/199b lim: 50 exec/s: 16 rss: 73Mb L: 24/31 MS: 1 ChangeByte- 00:09:34.611 [2024-07-21 18:24:52.751443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.611 [2024-07-21 18:24:52.751482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.611 [2024-07-21 18:24:52.751589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.611 [2024-07-21 18:24:52.751611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.611 #17 NEW cov: 12227 ft: 14638 corp: 10/227b lim: 50 exec/s: 17 rss: 73Mb L: 28/31 MS: 1 InsertRepeatedBytes- 00:09:34.611 [2024-07-21 18:24:52.812300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.611 [2024-07-21 18:24:52.812338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.611 [2024-07-21 18:24:52.812406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.611 [2024-07-21 18:24:52.812435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.611 [2024-07-21 18:24:52.812491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:34.611 [2024-07-21 18:24:52.812515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.611 [2024-07-21 18:24:52.812626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:34.611 [2024-07-21 18:24:52.812652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:34.870 #18 NEW cov: 12227 ft: 14984 corp: 11/268b lim: 50 exec/s: 18 rss: 73Mb L: 41/41 MS: 1 CrossOver- 00:09:34.870 [2024-07-21 18:24:52.872835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 
00:09:34.870 [2024-07-21 18:24:52.872875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.870 [2024-07-21 18:24:52.872949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.870 [2024-07-21 18:24:52.872977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.870 [2024-07-21 18:24:52.873027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:34.870 [2024-07-21 18:24:52.873051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.870 [2024-07-21 18:24:52.873161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:34.870 [2024-07-21 18:24:52.873188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:34.870 [2024-07-21 18:24:52.873299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:09:34.870 [2024-07-21 18:24:52.873327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:34.870 #19 NEW cov: 12227 ft: 15039 corp: 12/318b lim: 50 exec/s: 19 rss: 73Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:09:34.870 [2024-07-21 18:24:52.942135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.870 [2024-07-21 18:24:52.942177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.870 [2024-07-21 18:24:52.942284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.870 [2024-07-21 18:24:52.942310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.870 #20 NEW cov: 12227 ft: 15075 corp: 13/340b lim: 50 exec/s: 20 rss: 73Mb L: 22/50 MS: 1 CrossOver- 00:09:34.870 [2024-07-21 18:24:53.023073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.870 [2024-07-21 18:24:53.023113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.870 [2024-07-21 18:24:53.023196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:34.870 [2024-07-21 18:24:53.023226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.870 [2024-07-21 18:24:53.023281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:34.870 [2024-07-21 18:24:53.023307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.870 [2024-07-21 18:24:53.023414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:34.870 [2024-07-21 18:24:53.023442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:34.870 #21 NEW cov: 12227 ft: 15122 corp: 14/382b lim: 50 exec/s: 21 rss: 73Mb L: 42/50 MS: 1 CrossOver- 00:09:35.128 [2024-07-21 18:24:53.112611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.128 [2024-07-21 18:24:53.112655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.128 [2024-07-21 18:24:53.112765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.128 [2024-07-21 18:24:53.112791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.128 #22 NEW cov: 12227 ft: 15149 corp: 15/406b lim: 50 exec/s: 22 rss: 73Mb L: 24/50 MS: 1 CMP- DE: "\001\000\002\000"- 00:09:35.128 [2024-07-21 18:24:53.173655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.128 [2024-07-21 18:24:53.173698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.128 [2024-07-21 18:24:53.173763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.128 [2024-07-21 18:24:53.173790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.128 [2024-07-21 18:24:53.173888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.128 [2024-07-21 18:24:53.173918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.128 [2024-07-21 18:24:53.174026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:35.128 [2024-07-21 18:24:53.174052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:35.128 #27 NEW cov: 12227 ft: 15178 corp: 16/447b lim: 50 exec/s: 27 rss: 73Mb L: 41/50 MS: 5 ShuffleBytes-CopyPart-ChangeByte-CopyPart-InsertRepeatedBytes- 00:09:35.128 [2024-07-21 18:24:53.243507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.128 [2024-07-21 18:24:53.243548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.128 [2024-07-21 18:24:53.243622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.128 [2024-07-21 18:24:53.243647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.128 [2024-07-21 18:24:53.243725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.128 [2024-07-21 18:24:53.243750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.128 #28 NEW cov: 12227 ft: 15182 corp: 17/479b lim: 50 exec/s: 28 rss: 74Mb L: 32/50 MS: 1 InsertByte- 00:09:35.128 [2024-07-21 18:24:53.324182] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.128 [2024-07-21 18:24:53.324244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.128 [2024-07-21 18:24:53.324371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.128 [2024-07-21 18:24:53.324409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.128 [2024-07-21 18:24:53.324544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.128 [2024-07-21 18:24:53.324582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.128 [2024-07-21 18:24:53.324734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:35.128 [2024-07-21 18:24:53.324772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:35.386 #29 NEW cov: 12227 ft: 15193 corp: 18/528b lim: 50 exec/s: 29 rss: 74Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:09:35.386 [2024-07-21 18:24:53.414306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.386 [2024-07-21 18:24:53.414346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.386 [2024-07-21 18:24:53.414419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.386 [2024-07-21 18:24:53.414446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.386 [2024-07-21 18:24:53.414514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.386 [2024-07-21 18:24:53.414541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.386 [2024-07-21 18:24:53.414641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:35.386 [2024-07-21 18:24:53.414664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:35.386 [2024-07-21 18:24:53.474756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.386 [2024-07-21 18:24:53.474795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.386 [2024-07-21 18:24:53.474879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.386 [2024-07-21 18:24:53.474903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.386 [2024-07-21 18:24:53.474983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.386 [2024-07-21 18:24:53.475007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.386 [2024-07-21 18:24:53.475113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:35.386 [2024-07-21 18:24:53.475140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:35.386 #31 NEW cov: 12227 ft: 15231 corp: 19/575b lim: 50 exec/s: 31 rss: 74Mb L: 47/50 MS: 2 InsertRepeatedBytes-ChangeByte- 00:09:35.386 [2024-07-21 18:24:53.534229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.386 [2024-07-21 18:24:53.534269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.386 [2024-07-21 18:24:53.534373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.386 [2024-07-21 18:24:53.534399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.386 #32 NEW cov: 12227 ft: 15245 corp: 20/597b lim: 50 exec/s: 32 rss: 74Mb L: 22/50 MS: 1 CrossOver- 00:09:35.643 [2024-07-21 18:24:53.614198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.643 [2024-07-21 18:24:53.614243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.643 #42 NEW cov: 12227 ft: 15288 corp: 21/609b lim: 50 exec/s: 42 rss: 74Mb L: 12/50 MS: 5 ChangeBinInt-ChangeBit-CMP-ChangeByte-CopyPart- DE: "\000+\360\236\331{\226\344"- 00:09:35.643 [2024-07-21 18:24:53.685104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.643 [2024-07-21 18:24:53.685143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.643 [2024-07-21 18:24:53.685219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.643 [2024-07-21 18:24:53.685244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.643 [2024-07-21 18:24:53.685339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.643 [2024-07-21 18:24:53.685365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.643 #43 NEW cov: 12227 ft: 15295 corp: 22/641b lim: 50 exec/s: 21 rss: 74Mb L: 32/50 MS: 1 PersAutoDict- DE: "\000+\360\236\331{\226\344"- 00:09:35.643 #43 DONE cov: 12227 ft: 15295 corp: 22/641b lim: 50 exec/s: 21 rss: 74Mb 00:09:35.643 ###### Recommended dictionary. ###### 00:09:35.643 "\001\000\002\000" # Uses: 0 00:09:35.643 "\000+\360\236\331{\226\344" # Uses: 1 00:09:35.643 ###### End of recommended dictionary. 
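The block above is libFuzzer's end-of-run report for this target: the #43 DONE line gives the final coverage, feature, and corpus counters, and the recommended dictionary lists the byte sequences the auto-dictionary found productive, each with a use count (tokens recorded from CMP mutations, such as "\001\000\002\000" here). Extracting those blocks from a saved copy of the console output (build.log is a hypothetical name) is a one-liner; the entries could then be fed back to future runs as a seed dictionary:

    sed -n '/###### Recommended dictionary/,/End of recommended dictionary/p' build.log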
###### 00:09:35.643 Done 43 runs in 2 second(s) 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:35.901 18:24:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:09:35.901 [2024-07-21 18:24:53.910041] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
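The shell trace above is the per-target setup in nvmf/run.sh: the previous target's JSON config and LSAN suppression file are removed, then start_llvm_fuzz derives everything for fuzzer type 22. The TCP listen port is "44" plus the zero-padded type (printf %02d 22 yields port 4422), the stock fuzz_json.conf is rewritten to that port with sed, leak suppressions for two known allocations (spdk_nvmf_qpair_disconnect, nvmf_ctrlr_create) are emitted, and llvm_nvme_fuzz is launched pinned to core 0 (-m 0x1) with 512 MB of hugepage memory (-s 512), a one-unit time budget (-t 1), a fresh per-type corpus directory (-D), and the fuzzer type selector (-Z 22), which appears to pick one NVMe command handler per type. A condensed sketch of the port and transport-ID derivation (paths shortened, variable names chosen for illustration):

    fuzzer_type=22                                # -Z argument; one command handler per type
    port="44$(printf %02d "$fuzzer_type")"        # 4422, matching the trace above
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"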
00:09:35.901 [2024-07-21 18:24:53.910115] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824999 ] 00:09:35.901 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.159 [2024-07-21 18:24:54.164428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.159 [2024-07-21 18:24:54.256070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.159 [2024-07-21 18:24:54.320316] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.159 [2024-07-21 18:24:54.336546] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:09:36.159 INFO: Running with entropic power schedule (0xFF, 100). 00:09:36.159 INFO: Seed: 2748886712 00:09:36.417 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:36.417 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:36.417 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:36.417 INFO: A corpus is not provided, starting from an empty corpus 00:09:36.417 #2 INITED exec/s: 0 rss: 65Mb 00:09:36.417 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:36.417 This may also happen if the target rejected all inputs we tried so far 00:09:36.417 [2024-07-21 18:24:54.414064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:36.417 [2024-07-21 18:24:54.414109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.417 [2024-07-21 18:24:54.414223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:36.417 [2024-07-21 18:24:54.414242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.675 NEW_FUNC[1/699]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:09:36.675 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:36.675 #3 NEW cov: 12004 ft: 12004 corp: 2/36b lim: 85 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:09:36.933 [2024-07-21 18:24:54.895327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:36.933 [2024-07-21 18:24:54.895390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.933 [2024-07-21 18:24:54.895510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:36.933 [2024-07-21 18:24:54.895534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.933 #4 NEW cov: 12139 ft: 12638 corp: 3/71b lim: 85 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:09:36.933 [2024-07-21 18:24:54.965317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:36.933 [2024-07-21 
18:24:54.965349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.933 [2024-07-21 18:24:54.965419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:36.933 [2024-07-21 18:24:54.965439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.933 #10 NEW cov: 12145 ft: 12857 corp: 4/106b lim: 85 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:09:36.933 [2024-07-21 18:24:55.016206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:36.933 [2024-07-21 18:24:55.016235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.933 [2024-07-21 18:24:55.016339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:36.933 [2024-07-21 18:24:55.016358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.933 [2024-07-21 18:24:55.016448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:36.933 [2024-07-21 18:24:55.016465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.933 [2024-07-21 18:24:55.016555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:36.933 [2024-07-21 18:24:55.016577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.933 #11 NEW cov: 12230 ft: 13581 corp: 5/183b lim: 85 exec/s: 0 rss: 73Mb L: 77/77 MS: 1 InsertRepeatedBytes- 00:09:36.933 [2024-07-21 18:24:55.085814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:36.933 [2024-07-21 18:24:55.085840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.933 [2024-07-21 18:24:55.085923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:36.933 [2024-07-21 18:24:55.085943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.933 #17 NEW cov: 12230 ft: 13639 corp: 6/218b lim: 85 exec/s: 0 rss: 73Mb L: 35/77 MS: 1 ShuffleBytes- 00:09:36.933 [2024-07-21 18:24:55.136335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:36.933 [2024-07-21 18:24:55.136363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.933 [2024-07-21 18:24:55.136447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:36.933 [2024-07-21 18:24:55.136464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.933 [2024-07-21 18:24:55.136552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:36.933 
[2024-07-21 18:24:55.136566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.191 #18 NEW cov: 12230 ft: 13963 corp: 7/284b lim: 85 exec/s: 0 rss: 73Mb L: 66/77 MS: 1 InsertRepeatedBytes- 00:09:37.191 [2024-07-21 18:24:55.186190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.191 [2024-07-21 18:24:55.186219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.191 [2024-07-21 18:24:55.186301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.191 [2024-07-21 18:24:55.186318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.191 #19 NEW cov: 12230 ft: 14127 corp: 8/319b lim: 85 exec/s: 0 rss: 73Mb L: 35/77 MS: 1 ChangeBinInt- 00:09:37.191 [2024-07-21 18:24:55.246872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.191 [2024-07-21 18:24:55.246902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.191 [2024-07-21 18:24:55.246993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.191 [2024-07-21 18:24:55.247011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.191 [2024-07-21 18:24:55.247110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:37.191 [2024-07-21 18:24:55.247128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.191 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:37.191 #20 NEW cov: 12253 ft: 14185 corp: 9/385b lim: 85 exec/s: 0 rss: 73Mb L: 66/77 MS: 1 ChangeByte- 00:09:37.191 [2024-07-21 18:24:55.316878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.191 [2024-07-21 18:24:55.316908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.191 [2024-07-21 18:24:55.316985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.191 [2024-07-21 18:24:55.317004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.191 #26 NEW cov: 12253 ft: 14238 corp: 10/422b lim: 85 exec/s: 0 rss: 73Mb L: 37/77 MS: 1 CMP- DE: "\377\377"- 00:09:37.191 [2024-07-21 18:24:55.366965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.191 [2024-07-21 18:24:55.366993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.191 [2024-07-21 18:24:55.367064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.191 [2024-07-21 18:24:55.367084] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.191 #27 NEW cov: 12253 ft: 14274 corp: 11/457b lim: 85 exec/s: 27 rss: 73Mb L: 35/77 MS: 1 ChangeBit- 00:09:37.449 [2024-07-21 18:24:55.417301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.449 [2024-07-21 18:24:55.417329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.449 [2024-07-21 18:24:55.417401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.449 [2024-07-21 18:24:55.417418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.449 #28 NEW cov: 12253 ft: 14302 corp: 12/493b lim: 85 exec/s: 28 rss: 73Mb L: 36/77 MS: 1 InsertByte- 00:09:37.449 [2024-07-21 18:24:55.477538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.449 [2024-07-21 18:24:55.477565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.449 [2024-07-21 18:24:55.477649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.449 [2024-07-21 18:24:55.477667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.449 #29 NEW cov: 12253 ft: 14319 corp: 13/529b lim: 85 exec/s: 29 rss: 73Mb L: 36/77 MS: 1 InsertByte- 00:09:37.449 [2024-07-21 18:24:55.527906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.449 [2024-07-21 18:24:55.527936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.449 [2024-07-21 18:24:55.528008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.449 [2024-07-21 18:24:55.528028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.449 #30 NEW cov: 12253 ft: 14323 corp: 14/564b lim: 85 exec/s: 30 rss: 73Mb L: 35/77 MS: 1 CopyPart- 00:09:37.449 [2024-07-21 18:24:55.578073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.449 [2024-07-21 18:24:55.578104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.449 [2024-07-21 18:24:55.578182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.449 [2024-07-21 18:24:55.578200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.449 #31 NEW cov: 12253 ft: 14355 corp: 15/603b lim: 85 exec/s: 31 rss: 73Mb L: 39/77 MS: 1 PersAutoDict- DE: "\377\377"- 00:09:37.449 [2024-07-21 18:24:55.649219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.449 [2024-07-21 18:24:55.649251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.449 [2024-07-21 18:24:55.649360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.449 [2024-07-21 18:24:55.649380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.449 [2024-07-21 18:24:55.649472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:37.449 [2024-07-21 18:24:55.649494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.449 [2024-07-21 18:24:55.649589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:37.449 [2024-07-21 18:24:55.649606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.706 #32 NEW cov: 12253 ft: 14362 corp: 16/685b lim: 85 exec/s: 32 rss: 73Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:09:37.706 [2024-07-21 18:24:55.718835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.706 [2024-07-21 18:24:55.718866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.706 [2024-07-21 18:24:55.718946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.706 [2024-07-21 18:24:55.718967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.706 #33 NEW cov: 12253 ft: 14389 corp: 17/721b lim: 85 exec/s: 33 rss: 73Mb L: 36/82 MS: 1 ShuffleBytes- 00:09:37.706 [2024-07-21 18:24:55.789442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.706 [2024-07-21 18:24:55.789472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.706 [2024-07-21 18:24:55.789544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.706 [2024-07-21 18:24:55.789563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.706 #34 NEW cov: 12253 ft: 14395 corp: 18/760b lim: 85 exec/s: 34 rss: 74Mb L: 39/82 MS: 1 ChangeByte- 00:09:37.706 [2024-07-21 18:24:55.859865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.706 [2024-07-21 18:24:55.859894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.706 [2024-07-21 18:24:55.859976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.706 [2024-07-21 18:24:55.859993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.707 #35 NEW cov: 12253 ft: 14433 corp: 19/797b lim: 85 exec/s: 35 rss: 74Mb L: 37/82 MS: 1 InsertByte- 00:09:37.707 [2024-07-21 18:24:55.919960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 
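libFuzzer emits a NEW_FUNC line the first time an input drives execution into a function it had not covered before; the NEW_FUNC[1/699] and [2/699] pair near the start of this run marks the target entry points fuzz_nvm_reservation_register_command and TestOneInput, while the later NEW_FUNC[1/1] for get_rusage shows a reactor-side path being reached mid-campaign (the bracketed index appears to count the functions newly observed in that update). Listing every function a saved log (build.log, hypothetical name) newly reached:

    grep -oE 'NEW_FUNC\[[0-9]+/[0-9]+\]: 0x[0-9a-f]+ in [A-Za-z0-9_]+' build.log | sort -u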
00:09:37.707 [2024-07-21 18:24:55.919989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.707 [2024-07-21 18:24:55.920071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.707 [2024-07-21 18:24:55.920090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.964 #36 NEW cov: 12253 ft: 14461 corp: 20/841b lim: 85 exec/s: 36 rss: 74Mb L: 44/82 MS: 1 EraseBytes- 00:09:37.964 [2024-07-21 18:24:55.980759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.964 [2024-07-21 18:24:55.980787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.964 [2024-07-21 18:24:55.980888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.964 [2024-07-21 18:24:55.980907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.964 [2024-07-21 18:24:55.980999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:37.964 [2024-07-21 18:24:55.981014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.964 #37 NEW cov: 12253 ft: 14477 corp: 21/907b lim: 85 exec/s: 37 rss: 74Mb L: 66/82 MS: 1 CMP- DE: "\371\304\375\034\240\360+\000"- 00:09:37.964 [2024-07-21 18:24:56.031040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.964 [2024-07-21 18:24:56.031068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.964 [2024-07-21 18:24:56.031157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.964 [2024-07-21 18:24:56.031175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.964 [2024-07-21 18:24:56.031257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:37.964 [2024-07-21 18:24:56.031276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.964 #43 NEW cov: 12253 ft: 14528 corp: 22/962b lim: 85 exec/s: 43 rss: 74Mb L: 55/82 MS: 1 CopyPart- 00:09:37.964 [2024-07-21 18:24:56.090932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.964 [2024-07-21 18:24:56.090961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.964 [2024-07-21 18:24:56.091040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.964 [2024-07-21 18:24:56.091057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.964 #44 NEW cov: 12253 ft: 14539 corp: 23/998b lim: 85 exec/s: 44 rss: 74Mb L: 36/82 MS: 1 CopyPart- 
00:09:37.964 [2024-07-21 18:24:56.141109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.964 [2024-07-21 18:24:56.141136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.964 [2024-07-21 18:24:56.141220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.964 [2024-07-21 18:24:56.141237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.964 #45 NEW cov: 12253 ft: 14566 corp: 24/1034b lim: 85 exec/s: 45 rss: 74Mb L: 36/82 MS: 1 ChangeBinInt- 00:09:38.222 [2024-07-21 18:24:56.191581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.222 [2024-07-21 18:24:56.191609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.222 [2024-07-21 18:24:56.191700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.222 [2024-07-21 18:24:56.191718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.222 [2024-07-21 18:24:56.191811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.222 [2024-07-21 18:24:56.191832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.222 #46 NEW cov: 12253 ft: 14581 corp: 25/1100b lim: 85 exec/s: 46 rss: 74Mb L: 66/82 MS: 1 ChangeByte- 00:09:38.222 [2024-07-21 18:24:56.261140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.222 [2024-07-21 18:24:56.261168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.222 #47 NEW cov: 12253 ft: 15377 corp: 26/1127b lim: 85 exec/s: 47 rss: 74Mb L: 27/82 MS: 1 InsertRepeatedBytes- 00:09:38.222 [2024-07-21 18:24:56.322499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.222 [2024-07-21 18:24:56.322529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.222 [2024-07-21 18:24:56.322615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.222 [2024-07-21 18:24:56.322647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.222 [2024-07-21 18:24:56.322722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.222 [2024-07-21 18:24:56.322743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.222 [2024-07-21 18:24:56.322831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.222 [2024-07-21 18:24:56.322848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:09:38.222 #52 NEW cov: 12253 ft: 15393 corp: 27/1208b lim: 85 exec/s: 52 rss: 74Mb L: 81/82 MS: 5 CopyPart-ChangeByte-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:09:38.222 [2024-07-21 18:24:56.372045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.222 [2024-07-21 18:24:56.372073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.222 [2024-07-21 18:24:56.372165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.222 [2024-07-21 18:24:56.372183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.222 #53 NEW cov: 12253 ft: 15396 corp: 28/1247b lim: 85 exec/s: 26 rss: 74Mb L: 39/82 MS: 1 EraseBytes- 00:09:38.222 #53 DONE cov: 12253 ft: 15396 corp: 28/1247b lim: 85 exec/s: 26 rss: 74Mb 00:09:38.222 ###### Recommended dictionary. ###### 00:09:38.222 "\377\377" # Uses: 2 00:09:38.222 "\371\304\375\034\240\360+\000" # Uses: 0 00:09:38.222 ###### End of recommended dictionary. ###### 00:09:38.222 Done 53 runs in 2 second(s) 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:38.481 18:24:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 
512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:09:38.481 [2024-07-21 18:24:56.589592] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:38.481 [2024-07-21 18:24:56.589689] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3825355 ] 00:09:38.481 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.739 [2024-07-21 18:24:56.846970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.739 [2024-07-21 18:24:56.935454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.997 [2024-07-21 18:24:56.999702] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.997 [2024-07-21 18:24:57.015946] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:09:38.997 INFO: Running with entropic power schedule (0xFF, 100). 00:09:38.997 INFO: Seed: 1132924814 00:09:38.997 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4), 00:09:38.997 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358), 00:09:38.997 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:38.997 INFO: A corpus is not provided, starting from an empty corpus 00:09:38.997 #2 INITED exec/s: 0 rss: 65Mb 00:09:38.997 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
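Every target begins with the same banner: the entropic power schedule, a fresh random seed (Seed: 1132924814 for this run), the instrumentation inventory (358600 inline 8-bit counters plus matching PC tables from the single linked module), an empty-corpus notice, and the #2 INITED baseline. The "no interesting inputs" warning is expected noise at this point; coverage arrives moments later once the first request crosses the TCP transport. Collecting the seeds from a saved log (build.log, hypothetical name) would make a run repeatable in principle via libFuzzer's -seed= flag, though whether this wrapper forwards extra libFuzzer flags is not shown here:

    grep -o 'INFO: Seed: [0-9]*' build.log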
00:09:38.997 This may also happen if the target rejected all inputs we tried so far 00:09:38.997 [2024-07-21 18:24:57.093987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:38.997 [2024-07-21 18:24:57.094030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.997 [2024-07-21 18:24:57.094109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:38.997 [2024-07-21 18:24:57.094133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.997 [2024-07-21 18:24:57.094196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:38.997 [2024-07-21 18:24:57.094222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.563 NEW_FUNC[1/695]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:09:39.563 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:39.563 #15 NEW cov: 11912 ft: 11931 corp: 2/16b lim: 25 exec/s: 0 rss: 72Mb L: 15/15 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:09:39.563 [2024-07-21 18:24:57.574777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:39.563 [2024-07-21 18:24:57.574825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.563 [2024-07-21 18:24:57.574910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:39.563 [2024-07-21 18:24:57.574929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.563 [2024-07-21 18:24:57.575031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:39.563 [2024-07-21 18:24:57.575051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.563 NEW_FUNC[1/3]: 0xffbfc0 in posix_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1455 00:09:39.563 NEW_FUNC[2/3]: 0x1ab4da0 in spdk_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:539 00:09:39.563 #16 NEW cov: 12072 ft: 12493 corp: 3/31b lim: 25 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 ShuffleBytes- 00:09:39.563 [2024-07-21 18:24:57.645157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:39.563 [2024-07-21 18:24:57.645187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.563 [2024-07-21 18:24:57.645292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:39.564 [2024-07-21 18:24:57.645312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.564 [2024-07-21 18:24:57.645409] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:39.564 [2024-07-21 18:24:57.645427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:39.564 [2024-07-21 18:24:57.645514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:39.564 [2024-07-21 18:24:57.645533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:39.564 #17 NEW cov: 12078 ft: 13190 corp: 4/54b lim: 25 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CopyPart-
00:09:39.564 [2024-07-21 18:24:57.704482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:39.564 [2024-07-21 18:24:57.704517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:39.564 #21 NEW cov: 12163 ft: 13841 corp: 5/59b lim: 25 exec/s: 0 rss: 73Mb L: 5/23 MS: 4 CopyPart-CopyPart-CrossOver-CMP- DE: "\001\000\377\377"-
00:09:39.564 [2024-07-21 18:24:57.755540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:39.564 [2024-07-21 18:24:57.755568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:39.564 [2024-07-21 18:24:57.755690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:39.564 [2024-07-21 18:24:57.755709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:39.564 [2024-07-21 18:24:57.755803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:39.564 [2024-07-21 18:24:57.755818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:39.564 [2024-07-21 18:24:57.755911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:39.564 [2024-07-21 18:24:57.755928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:39.825 #24 NEW cov: 12163 ft: 13909 corp: 6/80b lim: 25 exec/s: 0 rss: 73Mb L: 21/23 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes-
00:09:39.825 [2024-07-21 18:24:57.805498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:39.825 [2024-07-21 18:24:57.805525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.805629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:39.825 [2024-07-21 18:24:57.805648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.805743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:39.825 [2024-07-21 18:24:57.805760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:39.825 #25 NEW cov: 12163 ft: 14012 corp: 7/95b lim: 25 exec/s: 0 rss: 73Mb L: 15/23 MS: 1 CopyPart-
00:09:39.825 [2024-07-21 18:24:57.855993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:39.825 [2024-07-21 18:24:57.856020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.856121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:39.825 [2024-07-21 18:24:57.856141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.856234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:39.825 [2024-07-21 18:24:57.856252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.856341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:39.825 [2024-07-21 18:24:57.856361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:39.825 #26 NEW cov: 12163 ft: 14101 corp: 8/118b lim: 25 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CMP- DE: "\000\000\000\000"-
00:09:39.825 [2024-07-21 18:24:57.925666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:39.825 [2024-07-21 18:24:57.925695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.925782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:39.825 [2024-07-21 18:24:57.925800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:39.825 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:09:39.825 #30 NEW cov: 12186 ft: 14392 corp: 9/128b lim: 25 exec/s: 0 rss: 73Mb L: 10/23 MS: 4 ChangeBit-InsertByte-ChangeBit-CMP- DE: "\377\003\000\000\000\000\000\000"-
00:09:39.825 [2024-07-21 18:24:57.976790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:39.825 [2024-07-21 18:24:57.976818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.976925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:39.825 [2024-07-21 18:24:57.976941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.976995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:39.825 [2024-07-21 18:24:57.977013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.977066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:39.825 [2024-07-21 18:24:57.977085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:57.977184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0
00:09:39.825 [2024-07-21 18:24:57.977202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:09:39.825 #31 NEW cov: 12186 ft: 14446 corp: 10/153b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CopyPart-
00:09:39.825 [2024-07-21 18:24:58.026891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:39.825 [2024-07-21 18:24:58.026919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:58.027030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:39.825 [2024-07-21 18:24:58.027050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:58.027106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:39.825 [2024-07-21 18:24:58.027124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:39.825 [2024-07-21 18:24:58.027173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:39.825 [2024-07-21 18:24:58.027191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:39.826 [2024-07-21 18:24:58.027279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0
00:09:39.826 [2024-07-21 18:24:58.027300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:09:40.159 #32 NEW cov: 12186 ft: 14483 corp: 11/178b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CopyPart-
00:09:40.159 [2024-07-21 18:24:58.077190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.159 [2024-07-21 18:24:58.077218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.077319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.159 [2024-07-21 18:24:58.077340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.077432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.159 [2024-07-21 18:24:58.077446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.077530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.159 [2024-07-21 18:24:58.077549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.159 #33 NEW cov: 12186 ft: 14573 corp: 12/199b lim: 25 exec/s: 33 rss: 73Mb L: 21/25 MS: 1 ChangeBinInt-
00:09:40.159 [2024-07-21 18:24:58.137447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.159 [2024-07-21 18:24:58.137473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.137592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.159 [2024-07-21 18:24:58.137611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.137708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.159 [2024-07-21 18:24:58.137723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.137814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.159 [2024-07-21 18:24:58.137835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.159 #34 NEW cov: 12186 ft: 14575 corp: 13/222b lim: 25 exec/s: 34 rss: 73Mb L: 23/25 MS: 1 ChangeBit-
00:09:40.159 [2024-07-21 18:24:58.187271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.159 [2024-07-21 18:24:58.187299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.187408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.159 [2024-07-21 18:24:58.187428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.187527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.159 [2024-07-21 18:24:58.187546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.159 #35 NEW cov: 12186 ft: 14656 corp: 14/237b lim: 25 exec/s: 35 rss: 73Mb L: 15/25 MS: 1 ChangeByte-
00:09:40.159 [2024-07-21 18:24:58.237781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.159 [2024-07-21 18:24:58.237811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.237927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.159 [2024-07-21 18:24:58.237946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.238039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.159 [2024-07-21 18:24:58.238058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.238155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.159 [2024-07-21 18:24:58.238178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.159 #36 NEW cov: 12186 ft: 14679 corp: 15/261b lim: 25 exec/s: 36 rss: 73Mb L: 24/25 MS: 1 InsertByte-
00:09:40.159 [2024-07-21 18:24:58.297799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.159 [2024-07-21 18:24:58.297828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.297925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.159 [2024-07-21 18:24:58.297942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.159 [2024-07-21 18:24:58.298039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.159 [2024-07-21 18:24:58.298056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.159 #37 NEW cov: 12186 ft: 14689 corp: 16/276b lim: 25 exec/s: 37 rss: 73Mb L: 15/25 MS: 1 ChangeByte-
00:09:40.418 [2024-07-21 18:24:58.358460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.419 [2024-07-21 18:24:58.358492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.358608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.419 [2024-07-21 18:24:58.358626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.358721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.419 [2024-07-21 18:24:58.358738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.358834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.419 [2024-07-21 18:24:58.358854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.419 #38 NEW cov: 12186 ft: 14700 corp: 17/300b lim: 25 exec/s: 38 rss: 73Mb L: 24/25 MS: 1 ChangeBinInt-
00:09:40.419 [2024-07-21 18:24:58.428808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.419 [2024-07-21 18:24:58.428837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.428938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.419 [2024-07-21 18:24:58.428959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.429058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.419 [2024-07-21 18:24:58.429081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.429174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.419 [2024-07-21 18:24:58.429195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.419 #39 NEW cov: 12186 ft: 14764 corp: 18/322b lim: 25 exec/s: 39 rss: 73Mb L: 22/25 MS: 1 EraseBytes-
00:09:40.419 [2024-07-21 18:24:58.498487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.419 [2024-07-21 18:24:58.498518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.498589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.419 [2024-07-21 18:24:58.498608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.419 #40 NEW cov: 12186 ft: 14787 corp: 19/334b lim: 25 exec/s: 40 rss: 74Mb L: 12/25 MS: 1 EraseBytes-
00:09:40.419 [2024-07-21 18:24:58.569013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.419 [2024-07-21 18:24:58.569042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.569150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.419 [2024-07-21 18:24:58.569171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.569268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.419 [2024-07-21 18:24:58.569284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.419 #41 NEW cov: 12186 ft: 14822 corp: 20/349b lim: 25 exec/s: 41 rss: 74Mb L: 15/25 MS: 1 ChangeASCIIInt-
00:09:40.419 [2024-07-21 18:24:58.629585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.419 [2024-07-21 18:24:58.629614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.629740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.419 [2024-07-21 18:24:58.629757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.629846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.419 [2024-07-21 18:24:58.629864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.419 [2024-07-21 18:24:58.629956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.419 [2024-07-21 18:24:58.629977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.678 #42 NEW cov: 12186 ft: 14897 corp: 21/372b lim: 25 exec/s: 42 rss: 74Mb L: 23/25 MS: 1 ChangeBinInt-
00:09:40.678 [2024-07-21 18:24:58.680195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.678 [2024-07-21 18:24:58.680226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.680350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.678 [2024-07-21 18:24:58.680370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.680430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.678 [2024-07-21 18:24:58.680450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.680484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.678 [2024-07-21 18:24:58.680504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.680601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0
00:09:40.678 [2024-07-21 18:24:58.680622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:09:40.678 #43 NEW cov: 12186 ft: 14906 corp: 22/397b lim: 25 exec/s: 43 rss: 74Mb L: 25/25 MS: 1 CopyPart-
00:09:40.678 [2024-07-21 18:24:58.750180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.678 [2024-07-21 18:24:58.750215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.750309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.678 [2024-07-21 18:24:58.750328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.750428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.678 [2024-07-21 18:24:58.750444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.750536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.678 [2024-07-21 18:24:58.750555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.678 #44 NEW cov: 12186 ft: 14935 corp: 23/420b lim: 25 exec/s: 44 rss: 74Mb L: 23/25 MS: 1 CMP- DE: "d4\212\226\241\360+\000"-
00:09:40.678 [2024-07-21 18:24:58.800219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.678 [2024-07-21 18:24:58.800250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.800363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.678 [2024-07-21 18:24:58.800382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.800477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.678 [2024-07-21 18:24:58.800492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.678 #49 NEW cov: 12186 ft: 15011 corp: 24/437b lim: 25 exec/s: 49 rss: 74Mb L: 17/25 MS: 5 InsertByte-CopyPart-ChangeBinInt-EraseBytes-CrossOver-
00:09:40.678 [2024-07-21 18:24:58.850493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.678 [2024-07-21 18:24:58.850522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.850642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.678 [2024-07-21 18:24:58.850661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.678 [2024-07-21 18:24:58.850758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.678 [2024-07-21 18:24:58.850778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.678 #50 NEW cov: 12186 ft: 15023 corp: 25/453b lim: 25 exec/s: 50 rss: 74Mb L: 16/25 MS: 1 InsertByte-
00:09:40.937 [2024-07-21 18:24:58.901267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.937 [2024-07-21 18:24:58.901296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.937 [2024-07-21 18:24:58.901397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.937 [2024-07-21 18:24:58.901418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.937 [2024-07-21 18:24:58.901518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.937 [2024-07-21 18:24:58.901535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.937 [2024-07-21 18:24:58.901618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.937 [2024-07-21 18:24:58.901636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.937 [2024-07-21 18:24:58.901720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0
00:09:40.937 [2024-07-21 18:24:58.901739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:09:40.938 #51 NEW cov: 12186 ft: 15039 corp: 26/478b lim: 25 exec/s: 51 rss: 74Mb L: 25/25 MS: 1 CopyPart-
00:09:40.938 [2024-07-21 18:24:58.970307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.938 [2024-07-21 18:24:58.970335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.938 #52 NEW cov: 12186 ft: 15062 corp: 27/486b lim: 25 exec/s: 52 rss: 74Mb L: 8/25 MS: 1 CopyPart-
00:09:40.938 [2024-07-21 18:24:59.031542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:09:40.938 [2024-07-21 18:24:59.031569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:40.938 [2024-07-21 18:24:59.031664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:09:40.938 [2024-07-21 18:24:59.031682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:40.938 [2024-07-21 18:24:59.031782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:09:40.938 [2024-07-21 18:24:59.031798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:40.938 [2024-07-21 18:24:59.031898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:09:40.938 [2024-07-21 18:24:59.031918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:40.938 #53 NEW cov: 12186 ft: 15076 corp: 28/508b lim: 25 exec/s: 26 rss: 74Mb L: 22/25 MS: 1 CrossOver-
00:09:40.938 #53 DONE cov: 12186 ft: 15076 corp: 28/508b lim: 25 exec/s: 26 rss: 74Mb
00:09:40.938 ###### Recommended dictionary. ######
00:09:40.938 "\001\000\377\377" # Uses: 0
00:09:40.938 "\000\000\000\000" # Uses: 0
00:09:40.938 "\377\003\000\000\000\000\000\000" # Uses: 0
00:09:40.938 "d4\212\226\241\360+\000" # Uses: 0
00:09:40.938 ###### End of recommended dictionary. ######
00:09:40.938 Done 53 runs in 2 second(s)
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424'
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:41.196 18:24:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24
00:09:41.196 [2024-07-21 18:24:59.259074] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:09:41.196 [2024-07-21 18:24:59.259151] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3825718 ]
00:09:41.455 EAL: No free 2048 kB hugepages reported on node 1
00:09:41.455 [2024-07-21 18:24:59.516566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:41.455 [2024-07-21 18:24:59.607904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:41.713 [2024-07-21 18:24:59.671972] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:41.713 [2024-07-21 18:24:59.688201] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 ***
00:09:41.713 INFO: Running with entropic power schedule (0xFF, 100).
00:09:41.713 INFO: Seed: 3804922086
00:09:41.713 INFO: Loaded 1 modules (358600 inline 8-bit counters): 358600 [0x29bce0c, 0x2a146d4),
00:09:41.713 INFO: Loaded 1 PC tables (358600 PCs): 358600 [0x2a146d8,0x2f8d358),
00:09:41.713 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:09:41.713 INFO: A corpus is not provided, starting from an empty corpus
00:09:41.713 #2 INITED exec/s: 0 rss: 65Mb
00:09:41.713 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:41.713 This may also happen if the target rejected all inputs we tried so far
00:09:41.713 [2024-07-21 18:24:59.743798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1374463283923456787 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:41.713 [2024-07-21 18:24:59.743840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:41.713 [2024-07-21 18:24:59.743909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1374463283923456787 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:41.713 [2024-07-21 18:24:59.743932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.279 NEW_FUNC[1/699]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685
00:09:42.279 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:09:42.279 #4 NEW cov: 12014 ft: 12015 corp: 2/46b lim: 100 exec/s: 0 rss: 72Mb L: 45/45 MS: 2 CopyPart-InsertRepeatedBytes-
00:09:42.279 [2024-07-21 18:25:00.266288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1374463286289044243 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.266347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.279 #7 NEW cov: 12144 ft: 13341 corp: 3/74b lim: 100 exec/s: 0 rss: 72Mb L: 28/45 MS: 3 ChangeBit-ChangeByte-CrossOver-
00:09:42.279 [2024-07-21 18:25:00.337159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.337202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.279 [2024-07-21 18:25:00.337302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.337323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.279 [2024-07-21 18:25:00.337424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.337447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:42.279 #14 NEW cov: 12150 ft: 13929 corp: 4/136b lim: 100 exec/s: 0 rss: 72Mb L: 62/62 MS: 2 CMP-InsertRepeatedBytes- DE: "\001\000\000\000\000\000\000\000"-
00:09:42.279 [2024-07-21 18:25:00.397907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.397946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.279 [2024-07-21 18:25:00.398015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3705461980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.398042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.279 [2024-07-21 18:25:00.398110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.398133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:42.279 [2024-07-21 18:25:00.398230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.398256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:42.279 #15 NEW cov: 12235 ft: 14512 corp: 5/230b lim: 100 exec/s: 0 rss: 72Mb L: 94/94 MS: 1 InsertRepeatedBytes-
00:09:42.279 [2024-07-21 18:25:00.487751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.487796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.279 [2024-07-21 18:25:00.487885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.279 [2024-07-21 18:25:00.487911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.537 #17 NEW cov: 12235 ft: 14569 corp: 6/281b lim: 100 exec/s: 0 rss: 72Mb L: 51/94 MS: 2 CopyPart-InsertRepeatedBytes-
00:09:42.537 [2024-07-21 18:25:00.548867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.548907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.537 [2024-07-21 18:25:00.548968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.548990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.537 [2024-07-21 18:25:00.549053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.549078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:42.537 [2024-07-21 18:25:00.549169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.549194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:42.537 #18 NEW cov: 12235 ft: 14632 corp: 7/367b lim: 100 exec/s: 0 rss: 72Mb L: 86/94 MS: 1 InsertRepeatedBytes-
00:09:42.537 [2024-07-21 18:25:00.639420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.639459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.537 [2024-07-21 18:25:00.639519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3705461980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.639543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.537 [2024-07-21 18:25:00.639592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.639615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:42.537 [2024-07-21 18:25:00.639712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.639737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:42.537 NEW_FUNC[1/1]: 0x1a85a40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:09:42.537 #19 NEW cov: 12258 ft: 14734 corp: 8/461b lim: 100 exec/s: 0 rss: 73Mb L: 94/94 MS: 1 ChangeBinInt-
00:09:42.537 [2024-07-21 18:25:00.729714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.729753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.537 [2024-07-21 18:25:00.729834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15914838021392882908 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.729856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.537 [2024-07-21 18:25:00.729953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.537 [2024-07-21 18:25:00.729978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:42.795 #20 NEW cov: 12258 ft: 14848 corp: 9/531b lim: 100 exec/s: 20 rss: 73Mb L: 70/94 MS: 1 CMP- DE: "\257\013\226\245\242\360+\000"-
00:09:42.795 [2024-07-21 18:25:00.790424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12153149034297075880 len:43177 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.795 [2024-07-21 18:25:00.790463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.795 [2024-07-21 18:25:00.790531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12153149036796881064 len:43177 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.795 [2024-07-21 18:25:00.790557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.795 [2024-07-21 18:25:00.790623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12153149036796881064 len:43177 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.795 [2024-07-21 18:25:00.790648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:42.795 [2024-07-21 18:25:00.790747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12153149036796881064 len:43177 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.795 [2024-07-21 18:25:00.790772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:42.795 #22 NEW cov: 12258 ft: 14936 corp: 10/628b lim: 100 exec/s: 22 rss: 73Mb L: 97/97 MS: 2 CrossOver-InsertRepeatedBytes-
00:09:42.795 [2024-07-21 18:25:00.849924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.795 [2024-07-21 18:25:00.849962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.795 [2024-07-21 18:25:00.850049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3705461980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.795 [2024-07-21 18:25:00.850072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.795 #23 NEW cov: 12258 ft: 14970 corp: 11/686b lim: 100 exec/s: 23 rss: 73Mb L: 58/97 MS: 1 EraseBytes-
00:09:42.795 [2024-07-21 18:25:00.930834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.795 [2024-07-21 18:25:00.930872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:42.795 [2024-07-21 18:25:00.930940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15914838021392882908 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.795 [2024-07-21 18:25:00.930966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:42.795 [2024-07-21 18:25:00.931035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:42.795 [2024-07-21 18:25:00.931059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:42.795 #24 NEW cov: 12258 ft: 15000 corp: 12/756b lim: 100 exec/s: 24 rss: 73Mb L: 70/97 MS: 1 ShuffleBytes-
00:09:43.052 [2024-07-21 18:25:01.011120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13527495317221260219 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.011159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.052 [2024-07-21 18:25:01.011232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.011258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.052 [2024-07-21 18:25:01.011340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.011362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:43.052 #25 NEW cov: 12258 ft: 15020 corp: 13/821b lim: 100 exec/s: 25 rss: 73Mb L: 65/97 MS: 1 InsertRepeatedBytes-
00:09:43.052 [2024-07-21 18:25:01.071561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.071599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.052 [2024-07-21 18:25:01.071659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15914838021392882908 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.071683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.052 [2024-07-21 18:25:01.071745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.071769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:43.052 #26 NEW cov: 12258 ft: 15044 corp: 14/891b lim: 100 exec/s: 26 rss: 73Mb L: 70/97 MS: 1 ChangeByte-
00:09:43.052 [2024-07-21 18:25:01.131804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13527495317221260219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.131842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.052 [2024-07-21 18:25:01.131910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.131932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.052 [2024-07-21 18:25:01.132011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.132037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:43.052 #27 NEW cov: 12258 ft: 15061 corp: 15/956b lim: 100 exec/s: 27 rss: 73Mb L: 65/97 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"-
00:09:43.052 [2024-07-21 18:25:01.212051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13527495317221260219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.212089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.052 [2024-07-21 18:25:01.212152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.212178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.052 [2024-07-21 18:25:01.212262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1374463283923456787 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.052 [2024-07-21 18:25:01.212287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:43.310 #28 NEW cov: 12258 ft: 15081 corp: 16/1018b lim: 100 exec/s: 28 rss: 73Mb L: 62/97 MS: 1 CrossOver-
00:09:43.310 [2024-07-21 18:25:01.302749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13527495317221260219 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.302788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.310 [2024-07-21 18:25:01.302856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13527612320720337851 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.302881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.310 [2024-07-21 18:25:01.302944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65468 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.302969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:43.310 [2024-07-21 18:25:01.303069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.303092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:43.310 #29 NEW cov: 12258 ft: 15121 corp: 17/1106b lim: 100 exec/s: 29 rss: 73Mb L: 88/97 MS: 1 InsertRepeatedBytes-
00:09:43.310 [2024-07-21 18:25:01.361864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1374463286289044243 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.361902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.310 #30 NEW cov: 12258 ft: 15145 corp: 18/1134b lim: 100 exec/s: 30 rss: 74Mb L: 28/97 MS: 1 ChangeByte-
00:09:43.310 [2024-07-21 18:25:01.442507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.442545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.310 [2024-07-21 18:25:01.442653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3705461980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.442680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.310 #31 NEW cov: 12258 ft: 15168 corp: 19/1192b lim: 100 exec/s: 31 rss: 74Mb L: 58/97 MS: 1 CrossOver-
00:09:43.310 [2024-07-21 18:25:01.523456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.523493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.310 [2024-07-21 18:25:01.523571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3705461980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.523595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.310 [2024-07-21 18:25:01.523689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:973078528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.523717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:43.310 [2024-07-21 18:25:01.523817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.310 [2024-07-21 18:25:01.523841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:43.567 #32 NEW cov: 12258 ft: 15219 corp: 20/1286b lim: 100 exec/s: 32 rss: 74Mb L: 94/97 MS: 1 ChangeByte-
00:09:43.567 [2024-07-21 18:25:01.593038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13527495317221260219 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.567 [2024-07-21 18:25:01.593077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.567 [2024-07-21 18:25:01.593140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.567 [2024-07-21 18:25:01.593164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.567 #33 NEW cov: 12258 ft: 15282 corp: 21/1326b lim: 100 exec/s: 33 rss: 74Mb L: 40/97 MS: 1 EraseBytes-
00:09:43.567 [2024-07-21 18:25:01.653651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.567 [2024-07-21 18:25:01.653690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.567 [2024-07-21 18:25:01.653782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15914838021392882908 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.567 [2024-07-21 18:25:01.653807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.567 [2024-07-21 18:25:01.653912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.567 [2024-07-21 18:25:01.653939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:43.567 #34 NEW cov: 12258 ft: 15318 corp: 22/1396b lim: 100 exec/s: 34 rss: 74Mb L: 70/97 MS: 1 CopyPart-
00:09:43.567 [2024-07-21 18:25:01.713743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.567 [2024-07-21 18:25:01.713781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:43.568 [2024-07-21 18:25:01.713849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15914838024376868060 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.568 [2024-07-21 18:25:01.713872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:43.568 [2024-07-21 18:25:01.713940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15914838024376867841 len:56541 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:43.568 [2024-07-21 18:25:01.713963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:43.568 #35 NEW cov: 12258 ft: 15332 corp: 23/1466b lim: 100 exec/s: 17 rss: 74Mb L: 70/97 MS: 1 CopyPart-
00:09:43.568 #35 DONE cov: 12258 ft: 15332 corp: 23/1466b lim: 100 exec/s: 17 rss: 74Mb
00:09:43.568 ###### Recommended dictionary. ######
00:09:43.568 "\001\000\000\000\000\000\000\000" # Uses: 1
00:09:43.568 "\257\013\226\245\242\360+\000" # Uses: 0
00:09:43.568 ###### End of recommended dictionary. ######
00:09:43.568 Done 35 runs in 2 second(s)
00:09:43.825 18:25:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz
00:09:43.825 18:25:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:43.825 18:25:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:43.825 18:25:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT
00:09:43.825
00:09:43.825 real 1m7.673s
00:09:43.825 user 1m39.439s
00:09:43.825 sys 0m8.652s
00:09:43.825 18:25:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:43.825 18:25:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:09:43.825 ************************************
00:09:43.825 END TEST nvmf_llvm_fuzz
00:09:43.825 ************************************
00:09:43.825 18:25:01 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0
00:09:43.825 18:25:01 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}"
00:09:43.825 18:25:01 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in
00:09:43.825 18:25:01 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:09:43.825 18:25:01 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:43.825 18:25:01 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:43.825 18:25:01 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:09:43.825 ************************************
00:09:43.825 START TEST vfio_llvm_fuzz
00:09:43.825 ************************************
00:09:43.825 18:25:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:09:44.085 * Looking for test storage...
00:09:44.085 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:44.085 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:44.085 #define SPDK_CONFIG_H 00:09:44.085 #define SPDK_CONFIG_APPS 1 00:09:44.085 #define SPDK_CONFIG_ARCH native 00:09:44.085 #undef SPDK_CONFIG_ASAN 00:09:44.085 #undef SPDK_CONFIG_AVAHI 00:09:44.085 #undef SPDK_CONFIG_CET 00:09:44.085 #define SPDK_CONFIG_COVERAGE 1 00:09:44.085 #define SPDK_CONFIG_CROSS_PREFIX 00:09:44.085 #undef SPDK_CONFIG_CRYPTO 00:09:44.085 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:44.085 #undef SPDK_CONFIG_CUSTOMOCF 00:09:44.085 #undef SPDK_CONFIG_DAOS 00:09:44.085 #define SPDK_CONFIG_DAOS_DIR 00:09:44.085 #define SPDK_CONFIG_DEBUG 1 00:09:44.085 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:44.085 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:44.085 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:44.085 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:44.085 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:44.085 #undef SPDK_CONFIG_DPDK_UADK 00:09:44.085 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:44.086 #define SPDK_CONFIG_EXAMPLES 1 00:09:44.086 #undef SPDK_CONFIG_FC 00:09:44.086 #define SPDK_CONFIG_FC_PATH 00:09:44.086 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:44.086 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:44.086 #undef SPDK_CONFIG_FUSE 00:09:44.086 #define SPDK_CONFIG_FUZZER 1 00:09:44.086 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:44.086 #undef SPDK_CONFIG_GOLANG 00:09:44.086 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:44.086 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:44.086 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:44.086 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:44.086 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:44.086 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:44.086 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:44.086 #define SPDK_CONFIG_IDXD 1 00:09:44.086 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:44.086 #undef SPDK_CONFIG_IPSEC_MB 00:09:44.086 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:44.086 #define SPDK_CONFIG_ISAL 1 00:09:44.086 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:09:44.086 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:44.086 #define SPDK_CONFIG_LIBDIR 00:09:44.086 #undef SPDK_CONFIG_LTO 00:09:44.086 #define SPDK_CONFIG_MAX_LCORES 128 00:09:44.086 #define SPDK_CONFIG_NVME_CUSE 1 00:09:44.086 #undef SPDK_CONFIG_OCF 00:09:44.086 #define SPDK_CONFIG_OCF_PATH 00:09:44.086 #define SPDK_CONFIG_OPENSSL_PATH 00:09:44.086 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:44.086 #define SPDK_CONFIG_PGO_DIR 00:09:44.086 #undef SPDK_CONFIG_PGO_USE 00:09:44.086 #define SPDK_CONFIG_PREFIX /usr/local 00:09:44.086 #undef SPDK_CONFIG_RAID5F 00:09:44.086 #undef SPDK_CONFIG_RBD 00:09:44.086 #define SPDK_CONFIG_RDMA 1 00:09:44.086 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:44.086 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:44.086 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:44.086 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:44.086 #undef SPDK_CONFIG_SHARED 00:09:44.086 #undef SPDK_CONFIG_SMA 00:09:44.086 #define SPDK_CONFIG_TESTS 1 00:09:44.086 #undef SPDK_CONFIG_TSAN 00:09:44.086 #define SPDK_CONFIG_UBLK 1 00:09:44.086 #define SPDK_CONFIG_UBSAN 1 00:09:44.086 #undef SPDK_CONFIG_UNIT_TESTS 00:09:44.086 #undef SPDK_CONFIG_URING 00:09:44.086 #define SPDK_CONFIG_URING_PATH 00:09:44.086 #undef SPDK_CONFIG_URING_ZNS 00:09:44.086 #undef SPDK_CONFIG_USDT 00:09:44.086 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:44.086 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:44.086 #define SPDK_CONFIG_VFIO_USER 1 00:09:44.086 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:44.086 #define SPDK_CONFIG_VHOST 1 00:09:44.086 #define SPDK_CONFIG_VIRTIO 1 00:09:44.086 #undef SPDK_CONFIG_VTUNE 00:09:44.086 #define SPDK_CONFIG_VTUNE_DIR 00:09:44.086 #define SPDK_CONFIG_WERROR 1 00:09:44.086 #define SPDK_CONFIG_WPDK_DIR 00:09:44.086 #undef SPDK_CONFIG_XNVME 00:09:44.086 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
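The applications.sh trace just above boils down to a simple glob test: the generated config header is read whole and matched against the literal string "#define SPDK_CONFIG_DEBUG" (the backslash-heavy pattern in the trace is that same string, escaped by xtrace). A standalone reproduction of the check, with the header path expressed through an assumed $SPDK_ROOT variable:

    # Distilled from applications.sh@22-24 above: gate debug-only behavior on
    # whether the build generated SPDK_CONFIG_DEBUG in include/spdk/config.h.
    config_h="$SPDK_ROOT/include/spdk/config.h"   # path assumed via $SPDK_ROOT
    if [[ -e "$config_h" && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build: SPDK_AUTOTEST_DEBUG_APPS may take effect
    fi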
00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:44.086 
18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:09:44.086 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:44.087 18:25:02 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
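The long run of ": 0" / ": 1" lines paired with export commands through this stretch of the trace is bash's default-assignment idiom: each test flag gets a default only if the autorun-spdk.conf sourced earlier did not already set it, then is exported. A minimal reproduction of one pair (SPDK_TEST_FUZZER was set to 1 by the conf file in this run, so its trace line reads ": 1"):

    # Default-then-export pattern behind the ": 0" / "export VAR" pairs above.
    : "${SPDK_TEST_FUZZER:=0}"   # keeps the 1 from autorun-spdk.conf if set
    export SPDK_TEST_FUZZER
    # Under `set -x` the first line traces as ": 1" (or ": 0"), which is
    # exactly what fills the surrounding block.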
00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:09:44.087 18:25:02 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:44.087 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:44.088 18:25:02 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 3826110 ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 3826110 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.LjoAMz 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.LjoAMz/tests/vfio /tmp/spdk.LjoAMz 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=86157582336 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508572672 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=8350990336 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895630336 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=6086656 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253045248 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1241088 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:44.088 18:25:02 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:44.088 * Looking for test storage... 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=86157582336 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=10565582848 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.088 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:44.088 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
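The locals above parameterize one fuzzer instance; the sed traced next clones the shared fuzz_vfio_json.conf, rewriting the generic /tmp/vfio-user paths to the instance-specific /tmp/vfio-user-0 ones. A sketch of that step in isolation; the redirection of sed's output into $vfiouser_cfg is assumed (the xtrace below does not show it), and $SPDK_ROOT stands in for the workspace path:

    # Per-instance config templating, mirroring vfio/run.sh@39 below.
    i=0   # fuzzer index, assumed
    sed -e "s%/tmp/vfio-user/domain/1%/tmp/vfio-user-$i/domain/1%;
            s%/tmp/vfio-user/domain/2%/tmp/vfio-user-$i/domain/2%" \
        "$SPDK_ROOT/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" \
        > "/tmp/vfio-user-$i/fuzz_vfio_json.conf"   # redirection assumed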
00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:44.089 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:09:44.089 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:44.347 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:44.347 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:44.347 18:25:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:09:44.347 [2024-07-21 18:25:02.331643] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:44.347 [2024-07-21 18:25:02.331723] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3826160 ] 00:09:44.347 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.347 [2024-07-21 18:25:02.443478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.347 [2024-07-21 18:25:02.544926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.605 INFO: Running with entropic power schedule (0xFF, 100). 00:09:44.605 INFO: Seed: 2566963260 00:09:44.605 INFO: Loaded 1 modules (355836 inline 8-bit counters): 355836 [0x297d60c, 0x29d4408), 00:09:44.605 INFO: Loaded 1 PC tables (355836 PCs): 355836 [0x29d4408,0x2f423c8), 00:09:44.605 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:44.605 INFO: A corpus is not provided, starting from an empty corpus 00:09:44.605 #2 INITED exec/s: 0 rss: 66Mb 00:09:44.605 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:44.605 This may also happen if the target rejected all inputs we tried so far 00:09:44.862 [2024-07-21 18:25:02.833608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:09:45.120 NEW_FUNC[1/655]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:09:45.120 NEW_FUNC[2/655]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:45.120 #4 NEW cov: 10925 ft: 10935 corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:09:45.379 NEW_FUNC[1/5]: 0x1422cb0 in post_completion /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1752 00:09:45.379 NEW_FUNC[2/5]: 0x142afe0 in cq_is_full /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1723 00:09:45.379 #20 NEW cov: 11004 ft: 14591 corp: 3/13b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:09:45.636 #23 NEW cov: 11011 ft: 14974 corp: 4/19b lim: 6 exec/s: 23 rss: 73Mb L: 6/6 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:09:45.895 #26 NEW cov: 11011 ft: 15435 corp: 5/25b lim: 6 exec/s: 26 rss: 74Mb L: 6/6 MS: 3 CopyPart-CopyPart-CrossOver- 00:09:46.154 #32 NEW cov: 11011 ft: 15723 corp: 6/31b lim: 6 exec/s: 32 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:09:46.413 #38 NEW cov: 11011 ft: 15907 corp: 7/37b lim: 6 exec/s: 38 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:09:46.671 #39 NEW cov: 11018 ft: 16500 corp: 8/43b lim: 6 exec/s: 39 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:09:46.928 #40 NEW cov: 11018 ft: 16697 corp: 9/49b lim: 6 exec/s: 20 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:09:46.928 #40 DONE cov: 11018 ft: 16697 corp: 9/49b lim: 6 exec/s: 20 rss: 74Mb 00:09:46.928 Done 40 runs in 2 second(s) 00:09:46.928 [2024-07-21 18:25:05.041483] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:47.204 
18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:09:47.204 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:47.204 18:25:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:09:47.204 [2024-07-21 18:25:05.408429] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:47.204 [2024-07-21 18:25:05.408516] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3826669 ] 00:09:47.462 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.462 [2024-07-21 18:25:05.539467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.462 [2024-07-21 18:25:05.643169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.720 INFO: Running with entropic power schedule (0xFF, 100). 00:09:47.720 INFO: Seed: 1360994435 00:09:47.720 INFO: Loaded 1 modules (355836 inline 8-bit counters): 355836 [0x297d60c, 0x29d4408), 00:09:47.720 INFO: Loaded 1 PC tables (355836 PCs): 355836 [0x29d4408,0x2f423c8), 00:09:47.720 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:47.720 INFO: A corpus is not provided, starting from an empty corpus 00:09:47.720 #2 INITED exec/s: 0 rss: 66Mb 00:09:47.720 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
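The '#N NEW cov: ...' lines in these runs are standard libFuzzer status output. Annotated schematically (field glosses per the libFuzzer documentation; reading the L: denominator as the maximum input length is my interpretation):

    # #4 NEW cov: C ft: F corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 2 ...
    #   #4       inputs executed so far
    #   NEW      this input hit new coverage and was kept in the corpus
    #   cov/ft   total coverage edges and finer-grained "features" observed
    #   corp     corpus size: 2 inputs, 7 bytes combined
    #   lim      current input-length cap; exec/s is the execution rate
    #   rss      resident memory; L: 6/6 is this input's length / max length
    #   MS: 2    the two-step mutation sequence that produced the input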
00:09:47.720 This may also happen if the target rejected all inputs we tried so far 00:09:47.720 [2024-07-21 18:25:05.919561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:09:47.977 [2024-07-21 18:25:05.987438] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:47.977 [2024-07-21 18:25:05.987469] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:47.977 [2024-07-21 18:25:05.987496] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:48.542 NEW_FUNC[1/660]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:09:48.542 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:48.542 #60 NEW cov: 10957 ft: 10931 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 3 InsertByte-CrossOver-InsertByte- 00:09:48.542 [2024-07-21 18:25:06.659375] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:48.542 [2024-07-21 18:25:06.659428] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:48.542 [2024-07-21 18:25:06.659456] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:48.799 NEW_FUNC[1/2]: 0x1311fe0 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:734 00:09:48.799 NEW_FUNC[2/2]: 0x1a51f70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:48.799 #66 NEW cov: 10998 ft: 14356 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ShuffleBytes- 00:09:48.799 [2024-07-21 18:25:06.918045] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:48.799 [2024-07-21 18:25:06.918079] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:48.799 [2024-07-21 18:25:06.918103] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:49.056 #69 NEW cov: 10998 ft: 14614 corp: 4/13b lim: 4 exec/s: 69 rss: 74Mb L: 4/4 MS: 3 CrossOver-ChangeBit-CopyPart- 00:09:49.056 [2024-07-21 18:25:07.163250] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:49.056 [2024-07-21 18:25:07.163280] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:49.056 [2024-07-21 18:25:07.163306] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:49.314 #70 NEW cov: 10998 ft: 14767 corp: 5/17b lim: 4 exec/s: 70 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:09:49.314 [2024-07-21 18:25:07.403680] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:49.314 [2024-07-21 18:25:07.403709] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:49.314 [2024-07-21 18:25:07.403733] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:49.571 #71 NEW cov: 10998 ft: 15577 corp: 6/21b lim: 4 exec/s: 71 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:09:49.571 [2024-07-21 18:25:07.645915] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:49.571 [2024-07-21 18:25:07.645944] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid 
argument 00:09:49.571 [2024-07-21 18:25:07.645969] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:49.828 #72 NEW cov: 11005 ft: 16089 corp: 7/25b lim: 4 exec/s: 72 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:09:49.828 [2024-07-21 18:25:07.891164] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:49.828 [2024-07-21 18:25:07.891194] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:49.828 [2024-07-21 18:25:07.891229] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:49.828 #73 NEW cov: 11005 ft: 16366 corp: 8/29b lim: 4 exec/s: 36 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:09:49.828 #73 DONE cov: 11005 ft: 16366 corp: 8/29b lim: 4 exec/s: 36 rss: 74Mb 00:09:49.828 Done 73 runs in 2 second(s) 00:09:50.085 [2024-07-21 18:25:08.067489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:09:50.342 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:09:50.343 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:50.343 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:50.343 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:50.343 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:09:50.343 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:50.343 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:50.343 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:50.343 18:25:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c 
/tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:09:50.343 [2024-07-21 18:25:08.426990] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:50.343 [2024-07-21 18:25:08.427071] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827032 ] 00:09:50.343 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.343 [2024-07-21 18:25:08.538629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.599 [2024-07-21 18:25:08.641602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.856 INFO: Running with entropic power schedule (0xFF, 100). 00:09:50.856 INFO: Seed: 72016060 00:09:50.856 INFO: Loaded 1 modules (355836 inline 8-bit counters): 355836 [0x297d60c, 0x29d4408), 00:09:50.856 INFO: Loaded 1 PC tables (355836 PCs): 355836 [0x29d4408,0x2f423c8), 00:09:50.856 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:50.856 INFO: A corpus is not provided, starting from an empty corpus 00:09:50.856 #2 INITED exec/s: 0 rss: 66Mb 00:09:50.856 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:50.856 This may also happen if the target rejected all inputs we tried so far 00:09:50.856 [2024-07-21 18:25:08.920135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:09:50.856 [2024-07-21 18:25:09.001440] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:51.370 NEW_FUNC[1/660]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:09:51.370 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:51.370 #17 NEW cov: 10948 ft: 10876 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 5 CopyPart-ChangeByte-CopyPart-CopyPart-InsertRepeatedBytes- 00:09:51.628 [2024-07-21 18:25:09.685693] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:51.628 NEW_FUNC[1/1]: 0x1a51f70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:51.628 #18 NEW cov: 10981 ft: 13951 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:09:51.886 [2024-07-21 18:25:09.927682] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:51.886 #24 NEW cov: 10981 ft: 15029 corp: 4/25b lim: 8 exec/s: 24 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes- 00:09:52.144 [2024-07-21 18:25:10.166642] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:52.144 #25 NEW cov: 10981 ft: 16441 corp: 5/33b lim: 8 exec/s: 25 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:09:52.402 [2024-07-21 18:25:10.393186] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:52.402 #26 NEW cov: 10981 ft: 16820 corp: 6/41b lim: 8 exec/s: 26 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:09:52.659 [2024-07-21 18:25:10.620406] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 
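The 'Oversized argument length, command 5' errors above are the exercised behavior rather than a failure: fuzz_vfio_user_get_region_info deliberately sends malformed requests, and the target's rejection is the code path under test. Matching the 'cmd N' numbers in this log against the handler names in the NEW_FUNC lines gives roughly this mapping (the symbolic VFIO_USER_* names are quoted from memory of the vfio-user spec, not from the log):

    # type 0  fuzz_vfio_user_region_rw        region read/write requests
    # type 1  fuzz_vfio_user_version          cmd 1, VFIO_USER_VERSION
    # type 2  fuzz_vfio_user_get_region_info  cmd 5, VFIO_USER_DEVICE_GET_REGION_INFO
    # type 3  fuzz_vfio_user_dma_map          VFIO_USER_DMA_MAP
    # type 4  fuzz_vfio_user_dma_unmap        VFIO_USER_DMA_UNMAP
    # type 5  fuzz_vfio_user_irq_set          cmd 8, VFIO_USER_DEVICE_SET_IRQS
    # type 6  fuzz_vfio_user_set_msix         cmd 8 again, via the MSI-X path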
00:09:52.659 #32 NEW cov: 10988 ft: 17064 corp: 7/49b lim: 8 exec/s: 32 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:52.659 [2024-07-21 18:25:10.829993] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:52.918 #38 NEW cov: 10988 ft: 17309 corp: 8/57b lim: 8 exec/s: 19 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:09:52.918 #38 DONE cov: 10988 ft: 17309 corp: 8/57b lim: 8 exec/s: 19 rss: 75Mb 00:09:52.918 Done 38 runs in 2 second(s) 00:09:52.918 [2024-07-21 18:25:10.959478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:09:53.176 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:53.176 18:25:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:09:53.176 [2024-07-21 18:25:11.328471] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
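One detail the xtrace above does not show is where the two 'echo leak:...' steps at run.sh@43-44 send their output; presumably they populate the LeakSanitizer suppression file that LSAN_OPTIONS names. A sketch of the equivalent setup under that assumption (the redirections are inferred, since bash xtrace omits them):

    suppress_file=/var/tmp/suppress_vfio_fuzz
    echo "leak:spdk_nvmf_qpair_disconnect" >  "$suppress_file"   # redirection inferred
    echo "leak:nvmf_ctrlr_create"          >> "$suppress_file"
    # Each 'leak:<symbol>' line suppresses any leak whose stack contains that
    # symbol; print_suppressions=0 hides the suppression summary, and
    # report_objects=1 lists the addresses of whatever still leaks.
    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0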
00:09:53.176 [2024-07-21 18:25:11.328561] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827392 ] 00:09:53.176 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.435 [2024-07-21 18:25:11.449730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.435 [2024-07-21 18:25:11.554190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.694 INFO: Running with entropic power schedule (0xFF, 100). 00:09:53.694 INFO: Seed: 2973025867 00:09:53.694 INFO: Loaded 1 modules (355836 inline 8-bit counters): 355836 [0x297d60c, 0x29d4408), 00:09:53.694 INFO: Loaded 1 PC tables (355836 PCs): 355836 [0x29d4408,0x2f423c8), 00:09:53.694 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:53.694 INFO: A corpus is not provided, starting from an empty corpus 00:09:53.694 #2 INITED exec/s: 0 rss: 66Mb 00:09:53.694 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:53.694 This may also happen if the target rejected all inputs we tried so far 00:09:53.694 [2024-07-21 18:25:11.824657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:09:54.518 NEW_FUNC[1/660]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:09:54.518 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:54.518 #262 NEW cov: 10953 ft: 10926 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 InsertByte-InsertRepeatedBytes-CopyPart-ChangeByte-CMP- DE: "\000\000\000\000\000\000\000 "- 00:09:54.518 NEW_FUNC[1/1]: 0x1a51f70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:54.518 #263 NEW cov: 10986 ft: 14149 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:09:54.777 #264 NEW cov: 10988 ft: 14456 corp: 4/97b lim: 32 exec/s: 264 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:55.034 #265 NEW cov: 10988 ft: 14930 corp: 5/129b lim: 32 exec/s: 265 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:55.291 #266 NEW cov: 10988 ft: 15765 corp: 6/161b lim: 32 exec/s: 266 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:55.550 #267 NEW cov: 10995 ft: 15871 corp: 7/193b lim: 32 exec/s: 267 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:55.809 #268 NEW cov: 10995 ft: 15984 corp: 8/225b lim: 32 exec/s: 134 rss: 74Mb L: 32/32 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000 "- 00:09:55.809 #268 DONE cov: 10995 ft: 15984 corp: 8/225b lim: 32 exec/s: 134 rss: 74Mb 00:09:55.809 ###### Recommended dictionary. ###### 00:09:55.809 "\000\000\000\000\000\000\000 " # Uses: 1 00:09:55.809 ###### End of recommended dictionary. 
###### 00:09:55.809 Done 268 runs in 2 second(s) 00:09:55.809 [2024-07-21 18:25:13.828469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:09:56.068 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:56.068 18:25:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:09:56.068 [2024-07-21 18:25:14.194508] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
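The 'Recommended dictionary' block that closed the previous run is libFuzzer suggesting byte strings worth keeping: the eight-byte constant "\000\000\000\000\000\000\000 " was learned from a comparison trace (the CMP mutation) and reused once by PersAutoDict. Entries like this can seed a later run via libFuzzer's -dict= flag, assuming the SPDK wrapper forwards unrecognized flags to libFuzzer, which this log does not show. A hypothetical dictionary file in the usual AFL/libFuzzer syntax:

    # llvm_vfio_3.dict (hypothetical file name)
    kw1="\x00\x00\x00\x00\x00\x00\x00\x20"
    # then, for example: .../llvm_vfio_fuzz <usual flags> -dict=llvm_vfio_3.dict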
00:09:56.068 [2024-07-21 18:25:14.194591] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827755 ] 00:09:56.068 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.327 [2024-07-21 18:25:14.326476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.327 [2024-07-21 18:25:14.431443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.586 INFO: Running with entropic power schedule (0xFF, 100). 00:09:56.586 INFO: Seed: 1556059120 00:09:56.586 INFO: Loaded 1 modules (355836 inline 8-bit counters): 355836 [0x297d60c, 0x29d4408), 00:09:56.586 INFO: Loaded 1 PC tables (355836 PCs): 355836 [0x29d4408,0x2f423c8), 00:09:56.586 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:56.586 INFO: A corpus is not provided, starting from an empty corpus 00:09:56.586 #2 INITED exec/s: 0 rss: 66Mb 00:09:56.586 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:56.586 This may also happen if the target rejected all inputs we tried so far 00:09:56.586 [2024-07-21 18:25:14.704011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:09:57.412 NEW_FUNC[1/659]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:09:57.412 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:57.412 #244 NEW cov: 10957 ft: 10931 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:09:57.412 NEW_FUNC[1/2]: 0x177db20 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1157 00:09:57.412 NEW_FUNC[2/2]: 0x1a51f70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:57.412 #245 NEW cov: 10991 ft: 13557 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:57.670 #246 NEW cov: 10991 ft: 14749 corp: 4/97b lim: 32 exec/s: 246 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:09:57.929 #252 NEW cov: 10991 ft: 15475 corp: 5/129b lim: 32 exec/s: 252 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:58.214 #253 NEW cov: 10991 ft: 15780 corp: 6/161b lim: 32 exec/s: 253 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:09:58.472 #254 NEW cov: 10998 ft: 15940 corp: 7/193b lim: 32 exec/s: 254 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:58.729 #255 NEW cov: 10998 ft: 16014 corp: 8/225b lim: 32 exec/s: 127 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:09:58.729 #255 DONE cov: 10998 ft: 16014 corp: 8/225b lim: 32 exec/s: 127 rss: 74Mb 00:09:58.729 Done 255 runs in 2 second(s) 00:09:58.729 [2024-07-21 18:25:16.870481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local 
fuzzer_type=5 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:58.987 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:09:58.987 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:59.246 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:59.246 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:59.246 18:25:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:09:59.246 [2024-07-21 18:25:17.236089] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:59.246 [2024-07-21 18:25:17.236185] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828121 ] 00:09:59.246 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.246 [2024-07-21 18:25:17.364199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.504 [2024-07-21 18:25:17.470349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.504 INFO: Running with entropic power schedule (0xFF, 100). 00:09:59.504 INFO: Seed: 316089270 00:09:59.504 INFO: Loaded 1 modules (355836 inline 8-bit counters): 355836 [0x297d60c, 0x29d4408), 00:09:59.504 INFO: Loaded 1 PC tables (355836 PCs): 355836 [0x29d4408,0x2f423c8), 00:09:59.504 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:59.504 INFO: A corpus is not provided, starting from an empty corpus 00:09:59.504 #2 INITED exec/s: 0 rss: 66Mb 00:09:59.504 WARNING: no interesting inputs were found so far. 
Is the code instrumented for coverage? 00:09:59.504 This may also happen if the target rejected all inputs we tried so far 00:09:59.761 [2024-07-21 18:25:17.754077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:09:59.761 [2024-07-21 18:25:17.808300] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:59.761 [2024-07-21 18:25:17.808347] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:00.337 NEW_FUNC[1/661]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:10:00.337 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:00.337 #47 NEW cov: 10969 ft: 10848 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 5 ChangeBinInt-ChangeByte-InsertRepeatedBytes-CopyPart-CrossOver- 00:10:00.337 [2024-07-21 18:25:18.464745] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:00.337 [2024-07-21 18:25:18.464800] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:00.595 NEW_FUNC[1/1]: 0x1a51f70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:10:00.595 #50 NEW cov: 11000 ft: 14113 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 3 CrossOver-CrossOver-CopyPart- 00:10:00.595 [2024-07-21 18:25:18.706694] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:00.595 [2024-07-21 18:25:18.706738] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:00.852 #51 NEW cov: 11000 ft: 15447 corp: 4/40b lim: 13 exec/s: 51 rss: 74Mb L: 13/13 MS: 1 CMP- DE: "\001\000"- 00:10:00.852 [2024-07-21 18:25:18.924148] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:00.852 [2024-07-21 18:25:18.924190] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:00.852 #52 NEW cov: 11000 ft: 16343 corp: 5/53b lim: 13 exec/s: 52 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:10:01.110 [2024-07-21 18:25:19.154003] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:01.110 [2024-07-21 18:25:19.154045] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:01.110 #56 NEW cov: 11000 ft: 16497 corp: 6/66b lim: 13 exec/s: 56 rss: 74Mb L: 13/13 MS: 4 InsertRepeatedBytes-ChangeBinInt-ChangeBit-InsertByte- 00:10:01.367 [2024-07-21 18:25:19.378704] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:01.367 [2024-07-21 18:25:19.378743] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:01.367 #57 NEW cov: 11007 ft: 16775 corp: 7/79b lim: 13 exec/s: 57 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:10:01.625 [2024-07-21 18:25:19.595233] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:01.625 [2024-07-21 18:25:19.595273] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:01.625 #58 NEW cov: 11007 ft: 17378 corp: 8/92b lim: 13 exec/s: 29 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:10:01.625 #58 DONE cov: 11007 ft: 17378 corp: 8/92b lim: 13 exec/s: 29 rss: 74Mb 00:10:01.625 ###### Recommended dictionary. 
###### 00:10:01.625 "\001\000" # Uses: 0 00:10:01.625 ###### End of recommended dictionary. ###### 00:10:01.625 Done 58 runs in 2 second(s) 00:10:01.626 [2024-07-21 18:25:19.751468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:10:01.884 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:01.884 18:25:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:10:02.143 [2024-07-21 18:25:20.118003] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
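Each iteration also logs its PRNG seed (INFO: Seed: 316089270 for the type-5 run above); libFuzzer draws a fresh seed per process, which is why no two of these short runs mutate identically. Replaying a run's mutation schedule should be possible with the standard -seed= flag, again assuming the wrapper forwards it, which the log does not confirm:

    .../llvm_vfio_fuzz <usual flags> -seed=316089270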
00:10:02.143 [2024-07-21 18:25:20.118085] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828537 ] 00:10:02.143 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.143 [2024-07-21 18:25:20.249104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.143 [2024-07-21 18:25:20.346269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.402 INFO: Running with entropic power schedule (0xFF, 100). 00:10:02.402 INFO: Seed: 3168094850 00:10:02.402 INFO: Loaded 1 modules (355836 inline 8-bit counters): 355836 [0x297d60c, 0x29d4408), 00:10:02.402 INFO: Loaded 1 PC tables (355836 PCs): 355836 [0x29d4408,0x2f423c8), 00:10:02.402 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:02.402 INFO: A corpus is not provided, starting from an empty corpus 00:10:02.402 #2 INITED exec/s: 0 rss: 66Mb 00:10:02.402 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:02.402 This may also happen if the target rejected all inputs we tried so far 00:10:02.402 [2024-07-21 18:25:20.606042] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:10:02.660 [2024-07-21 18:25:20.650284] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:02.660 [2024-07-21 18:25:20.650324] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:03.225 NEW_FUNC[1/661]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:10:03.225 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:03.225 #28 NEW cov: 10961 ft: 10926 corp: 2/10b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:10:03.225 [2024-07-21 18:25:21.263292] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:03.225 [2024-07-21 18:25:21.263350] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:03.225 NEW_FUNC[1/1]: 0x1a51f70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:10:03.225 #29 NEW cov: 10992 ft: 15080 corp: 3/19b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:10:03.483 [2024-07-21 18:25:21.449761] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:03.483 [2024-07-21 18:25:21.449805] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:03.483 #30 NEW cov: 10992 ft: 15576 corp: 4/28b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:10:03.483 [2024-07-21 18:25:21.638615] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:03.483 [2024-07-21 18:25:21.638658] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:03.741 #36 NEW cov: 10992 ft: 16190 corp: 5/37b lim: 9 exec/s: 36 rss: 75Mb L: 9/9 MS: 1 CopyPart- 00:10:03.741 [2024-07-21 18:25:21.859558] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:03.741 [2024-07-21 18:25:21.859600] vfio_user.c: 144:vfio_user_read: *ERROR*: 
Command 8 return failure 00:10:04.000 #42 NEW cov: 10992 ft: 16254 corp: 6/46b lim: 9 exec/s: 42 rss: 75Mb L: 9/9 MS: 1 ChangeBinInt- 00:10:04.000 [2024-07-21 18:25:22.080983] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:04.000 [2024-07-21 18:25:22.081025] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:04.000 #43 NEW cov: 10992 ft: 16288 corp: 7/55b lim: 9 exec/s: 43 rss: 75Mb L: 9/9 MS: 1 ShuffleBytes- 00:10:04.258 [2024-07-21 18:25:22.297291] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:04.258 [2024-07-21 18:25:22.297333] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:04.258 #44 NEW cov: 10999 ft: 16538 corp: 8/64b lim: 9 exec/s: 44 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:10:04.516 [2024-07-21 18:25:22.515640] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:04.516 [2024-07-21 18:25:22.515681] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:04.516 #45 NEW cov: 10999 ft: 16634 corp: 9/73b lim: 9 exec/s: 22 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:10:04.516 #45 DONE cov: 10999 ft: 16634 corp: 9/73b lim: 9 exec/s: 22 rss: 75Mb 00:10:04.516 Done 45 runs in 2 second(s) 00:10:04.517 [2024-07-21 18:25:22.671487] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:10:04.775 18:25:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:10:05.033 18:25:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:05.033 18:25:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:05.033 18:25:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:10:05.033 00:10:05.033 real 0m21.000s 00:10:05.033 user 0m28.290s 00:10:05.033 sys 0m2.363s 00:10:05.033 18:25:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.033 18:25:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:05.033 ************************************ 00:10:05.033 END TEST vfio_llvm_fuzz 00:10:05.033 ************************************ 00:10:05.033 18:25:23 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:10:05.033 18:25:23 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:10:05.033 00:10:05.033 real 1m28.948s 00:10:05.033 user 2m7.840s 00:10:05.033 sys 0m11.199s 00:10:05.033 18:25:23 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.033 18:25:23 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:05.033 ************************************ 00:10:05.033 END TEST llvm_fuzz 00:10:05.033 ************************************ 00:10:05.033 18:25:23 -- common/autotest_common.sh@1142 -- # return 0 00:10:05.033 18:25:23 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:10:05.033 18:25:23 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:10:05.033 18:25:23 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:10:05.033 18:25:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:05.033 18:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:05.033 18:25:23 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:10:05.033 18:25:23 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:10:05.033 18:25:23 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:10:05.033 18:25:23 -- common/autotest_common.sh@10 -- # set +x 
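On the per-run footers just above: the '#45 DONE ...' line repeats the final statistics with exec/s averaged over the whole run, and 'Done 45 runs in 2 second(s)' is the total execution count. Since run.sh passes -t 1, the budget is one second of fuzzing per type, so the two seconds of wall time presumably include target start-up and teardown; inputs that found new coverage persist in the corpus/llvm_vfio_N directory for the next CI run. Annotated:

    # #45 DONE cov: 10999 ft: 16634 corp: 9/73b lim: 9 exec/s: 22 rss: 75Mb
    #     ^ budget exhausted        ^ 9 inputs, 73 bytes kept  ^ averaged rate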
00:10:09.216 INFO: APP EXITING 00:10:09.216 INFO: killing all VMs 00:10:09.216 INFO: killing vhost app 00:10:09.216 INFO: EXIT DONE 00:10:13.409 Waiting for block devices as requested 00:10:13.409 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:10:13.409 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:13.409 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:13.409 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:13.409 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:13.409 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:13.667 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:13.667 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:13.667 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:13.925 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:13.925 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:13.925 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:14.183 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:14.183 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:14.441 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:14.441 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:14.441 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:21.004 Cleaning 00:10:21.004 Removing: /dev/shm/spdk_tgt_trace.pid3799212 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3796750 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3797936 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3799212 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3799740 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3800476 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3800663 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3801482 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3801597 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3801917 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3802216 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3802545 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3802797 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3803157 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3803381 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3803591 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3803814 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3804401 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3807079 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3807459 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3807673 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3807845 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3808325 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3808408 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3808800 00:10:21.004 Removing: /var/run/dpdk/spdk_pid3808976 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3809190 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3809364 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3809568 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3809746 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3810198 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3810390 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3810590 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3810740 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3811037 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3811070 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3811279 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3811494 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3811687 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3811887 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3812081 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3812280 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3812477 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3812760 
00:10:21.005 Removing: /var/run/dpdk/spdk_pid3813025 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3813228 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3813419 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3813621 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3813812 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3814025 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3814306 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3814566 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3814760 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3814963 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3815161 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3815361 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3815569 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3815788 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3816047 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3816616 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3816957 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3817311 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3817798 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3818363 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3818915 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3819264 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3819624 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3819983 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3820339 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3820698 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3821062 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3821415 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3821777 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3822136 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3822492 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3822851 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3823209 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3823562 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3823924 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3824277 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3824640 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3824999 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3825355 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3825718 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3826160 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3826669 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3827032 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3827392 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3827755 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3828121 00:10:21.005 Removing: /var/run/dpdk/spdk_pid3828537 00:10:21.005 Clean 00:10:21.005 18:25:38 -- common/autotest_common.sh@1451 -- # return 0 00:10:21.005 18:25:38 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:10:21.005 18:25:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.005 18:25:38 -- common/autotest_common.sh@10 -- # set +x 00:10:21.005 18:25:38 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:10:21.005 18:25:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.005 18:25:38 -- common/autotest_common.sh@10 -- # set +x 00:10:21.005 18:25:38 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:10:21.005 18:25:38 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:10:21.005 18:25:38 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:10:21.005 18:25:38 -- spdk/autotest.sh@391 -- # hash lcov 00:10:21.005 18:25:38 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 
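The timing_exit/timing_finish steps around here feed the per-step durations accumulated in timing.txt to FlameGraph; the exact flamegraph.pl call appears further below. Rendered by hand it would look like the following, where the output redirection is added for illustration because the log does not show where stdout goes:

    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds \
        .../spdk/../output/timing.txt > timing.svg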
00:10:21.005 18:25:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:21.005 18:25:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:10:21.005 18:25:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.005 18:25:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.005 18:25:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.005 18:25:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.005 18:25:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.005 18:25:38 -- paths/export.sh@5 -- $ export PATH 00:10:21.005 18:25:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.005 18:25:38 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:10:21.005 18:25:38 -- common/autobuild_common.sh@447 -- $ date +%s 00:10:21.005 18:25:38 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721579138.XXXXXX 00:10:21.005 18:25:38 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721579138.9yZVyY 00:10:21.005 18:25:38 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:10:21.005 18:25:38 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:10:21.005 18:25:38 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:10:21.005 18:25:38 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:10:21.005 18:25:38 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:10:21.005 18:25:38 -- common/autobuild_common.sh@463 -- $ get_config_params 00:10:21.005 18:25:38 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:10:21.005 18:25:38 -- common/autotest_common.sh@10 
00:10:21.005 18:25:39 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:10:21.005 18:25:39 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:10:21.005 18:25:39 -- pm/common@17 -- $ local monitor
00:10:21.005 18:25:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:21.005 18:25:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:21.005 18:25:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:21.005 18:25:39 -- pm/common@21 -- $ date +%s
00:10:21.005 18:25:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:21.005 18:25:39 -- pm/common@21 -- $ date +%s
00:10:21.005 18:25:39 -- pm/common@25 -- $ sleep 1
00:10:21.005 18:25:39 -- pm/common@21 -- $ date +%s
00:10:21.005 18:25:39 -- pm/common@21 -- $ date +%s
00:10:21.005 18:25:39 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721579139
00:10:21.005 18:25:39 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721579139
00:10:21.005 18:25:39 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721579139
00:10:21.005 18:25:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721579139
00:10:21.005 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721579139_collect-vmstat.pm.log
00:10:21.005 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721579139_collect-cpu-load.pm.log
00:10:21.005 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721579139_collect-cpu-temp.pm.log
00:10:21.005 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721579139_collect-bmc-pm.bmc.pm.log
00:10:21.944 18:25:40 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:10:21.944 18:25:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72
00:10:21.944 18:25:40 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:21.944 18:25:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:10:21.944 18:25:40 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:10:21.944 18:25:40 -- spdk/autopackage.sh@19 -- $ timing_finish
00:10:21.944 18:25:40 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:10:21.944 18:25:40 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:10:21.944 18:25:40 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:10:21.944 18:25:40 -- spdk/autopackage.sh@20 -- $ exit 0
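Note: start_monitor_resources launches the collect-* resource monitors in the background, and the EXIT trap guarantees they are torn down however the build ends. A hedged sketch of that start/trap pattern follows; SPDK_DIR, the inline pidfile writes, and the shortened monitor list are assumptions for illustration (the real collect-* scripts manage their own pidfiles and logs).

    # MONITOR_RESOURCES mirrors the loop at pm/common@19; paths are illustrative.
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp)
    start_monitor_resources() {
        local monitor
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            # -d: output dir, -l: log to file, -p: per-run log-name prefix
            "$SPDK_DIR/scripts/perf/pm/$monitor" \
                -d "$out/power" -l -p "monitor.autopackage.sh.$(date +%s)" &
            echo $! > "$out/power/$monitor.pid"   # assumed pidfile, read back at teardown
        done
    }
    trap stop_monitor_resources EXIT   # teardown function sketched after the log below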
00:10:21.944 18:25:40 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:10:21.944 18:25:40 -- pm/common@29 -- $ signal_monitor_resources TERM
00:10:21.944 18:25:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:10:21.944 18:25:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:21.944 18:25:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:10:21.944 18:25:40 -- pm/common@44 -- $ pid=3834642
00:10:21.944 18:25:40 -- pm/common@50 -- $ kill -TERM 3834642
00:10:21.944 18:25:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:21.944 18:25:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:10:21.944 18:25:40 -- pm/common@44 -- $ pid=3834645
00:10:21.944 18:25:40 -- pm/common@50 -- $ kill -TERM 3834645
00:10:21.944 18:25:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:21.944 18:25:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:10:21.944 18:25:40 -- pm/common@44 -- $ pid=3834648
00:10:21.944 18:25:40 -- pm/common@50 -- $ kill -TERM 3834648
00:10:21.944 18:25:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:21.944 18:25:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:10:21.944 18:25:40 -- pm/common@44 -- $ pid=3834681
00:10:21.944 18:25:40 -- pm/common@50 -- $ sudo -E kill -TERM 3834681
00:10:21.944 + [[ -n 3686146 ]]
00:10:21.944 + sudo kill 3686146
00:10:21.954 [Pipeline] }
00:10:21.973 [Pipeline] // stage
00:10:21.979 [Pipeline] }
00:10:21.999 [Pipeline] // timeout
00:10:22.005 [Pipeline] }
00:10:22.023 [Pipeline] // catchError
00:10:22.029 [Pipeline] }
00:10:22.048 [Pipeline] // wrap
00:10:22.055 [Pipeline] }
00:10:22.071 [Pipeline] // catchError
00:10:22.081 [Pipeline] stage
00:10:22.084 [Pipeline] { (Epilogue)
00:10:22.099 [Pipeline] catchError
00:10:22.101 [Pipeline] {
00:10:22.114 [Pipeline] echo
00:10:22.116 Cleanup processes
00:10:22.122 [Pipeline] sh
00:10:22.405 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:22.405 3746103 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721578706
00:10:22.405 3746138 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721578706
00:10:22.405 3834812 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:10:22.405 3835466 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:22.426 [Pipeline] sh
00:10:22.725 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:22.725 ++ grep -v 'sudo pgrep'
00:10:22.725 ++ awk '{print $1}'
00:10:22.725 + sudo kill -9 3746103 3746138 3834812
00:10:22.752 [Pipeline] sh
00:10:23.033 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:10:24.997 [Pipeline] sh
00:10:25.274 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:10:25.274 Artifacts sizes are good
00:10:25.286 [Pipeline] archiveArtifacts
00:10:25.291 Archiving artifacts
00:10:25.367 [Pipeline] sh
00:10:25.648 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
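Note: teardown happens in two layers above: stop_monitor_resources resolves each monitor's pidfile and sends SIGTERM, then the Epilogue does a last-resort pgrep sweep and SIGKILLs anything still touching the workspace. A hedged sketch of both layers, assuming the pidfile layout $out/power/<monitor>.pid shown in the log; MONITOR_RESOURCES and WORKSPACE are carried over from the earlier sketch.

    stop_monitor_resources() { signal_monitor_resources TERM; }

    signal_monitor_resources() {        # cf. pm/common@29..50
        local signal=$1 monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            local pidfile="$out/power/$monitor.pid"
            [[ -e $pidfile ]] || continue
            pid=$(<"$pidfile")
            kill "-$signal" "$pid" 2>/dev/null || true   # monitor may already be gone
        done
    }

    # Last-resort sweep, as in the Epilogue: list survivors, drop the pgrep
    # process itself from the listing, and SIGKILL whatever remains.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    [[ -n $pids ]] && sudo kill -9 $pids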
00:10:25.662 [Pipeline] cleanWs
00:10:25.671 [WS-CLEANUP] Deleting project workspace...
00:10:25.671 [WS-CLEANUP] Deferred wipeout is used...
00:10:25.676 [WS-CLEANUP] done
00:10:25.679 [Pipeline] }
00:10:25.699 [Pipeline] // catchError
00:10:25.711 [Pipeline] sh
00:10:25.990 + logger -p user.info -t JENKINS-CI
00:10:25.998 [Pipeline] }
00:10:26.013 [Pipeline] // stage
00:10:26.019 [Pipeline] }
00:10:26.034 [Pipeline] // node
00:10:26.039 [Pipeline] End of Pipeline
00:10:26.065 Finished: SUCCESS